2026-03-11 00:00:07.869885 | Job console starting
2026-03-11 00:00:07.894784 | Updating git repos
2026-03-11 00:00:08.121655 | Cloning repos into workspace
2026-03-11 00:00:08.387210 | Restoring repo states
2026-03-11 00:00:08.411490 | Merging changes
2026-03-11 00:00:08.411512 | Checking out repos
2026-03-11 00:00:08.881769 | Preparing playbooks
2026-03-11 00:00:09.893684 | Running Ansible setup
2026-03-11 00:00:18.692377 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-11 00:00:21.140343 |
2026-03-11 00:00:21.140483 | PLAY [Base pre]
2026-03-11 00:00:21.209576 |
2026-03-11 00:00:21.209766 | TASK [Setup log path fact]
2026-03-11 00:00:21.252809 | orchestrator | ok
2026-03-11 00:00:21.279477 |
2026-03-11 00:00:21.279611 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-11 00:00:21.369147 | orchestrator | ok
2026-03-11 00:00:21.403651 |
2026-03-11 00:00:21.403786 | TASK [emit-job-header : Print job information]
2026-03-11 00:00:21.526434 | # Job Information
2026-03-11 00:00:21.526599 | Ansible Version: 2.16.14
2026-03-11 00:00:21.526635 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-11 00:00:21.526669 | Pipeline: periodic-midnight
2026-03-11 00:00:21.526693 | Executor: 521e9411259a
2026-03-11 00:00:21.527061 | Triggered by: https://github.com/osism/testbed
2026-03-11 00:00:21.527094 | Event ID: 8eb8c26edea145abae90a2d897835dd2
2026-03-11 00:00:21.539123 |
2026-03-11 00:00:21.539349 | LOOP [emit-job-header : Print node information]
2026-03-11 00:00:21.702488 | orchestrator | ok:
2026-03-11 00:00:21.702655 | orchestrator | # Node Information
2026-03-11 00:00:21.702728 | orchestrator | Inventory Hostname: orchestrator
2026-03-11 00:00:21.702804 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-11 00:00:21.702827 | orchestrator | Username: zuul-testbed06
2026-03-11 00:00:21.702865 | orchestrator | Distro: Debian 12.13
2026-03-11 00:00:21.702885 | orchestrator | Provider: static-testbed
2026-03-11 00:00:21.702902 | orchestrator | Region:
2026-03-11 00:00:21.702920 | orchestrator | Label: testbed-orchestrator
2026-03-11 00:00:21.702936 | orchestrator | Product Name: OpenStack Nova
2026-03-11 00:00:21.702952 | orchestrator | Interface IP: 81.163.193.140
2026-03-11 00:00:21.723777 |
2026-03-11 00:00:21.723910 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-11 00:00:22.853629 | orchestrator -> localhost | changed
2026-03-11 00:00:22.861256 |
2026-03-11 00:00:22.861364 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-11 00:00:25.176814 | orchestrator -> localhost | changed
2026-03-11 00:00:25.204532 |
2026-03-11 00:00:25.204651 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-11 00:00:25.872867 | orchestrator -> localhost | ok
2026-03-11 00:00:25.879694 |
2026-03-11 00:00:25.879841 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-11 00:00:25.915828 | orchestrator | ok
2026-03-11 00:00:25.959024 | orchestrator | included: /var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-11 00:00:25.969428 |
2026-03-11 00:00:25.969529 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-11 00:00:29.874364 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-11 00:00:29.874559 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/work/f4dbef49419b430cbfedd1f7a77edb21_id_rsa
2026-03-11 00:00:29.874599 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/work/f4dbef49419b430cbfedd1f7a77edb21_id_rsa.pub
2026-03-11 00:00:29.874627 | orchestrator -> localhost | The key fingerprint is:
2026-03-11 00:00:29.874656 | orchestrator -> localhost | SHA256:4xTtuwjzp2doyO4MDbwIIlqhgR7gl9GnR7gIGqN1xys zuul-build-sshkey
2026-03-11 00:00:29.874680 | orchestrator -> localhost | The key's randomart image is:
2026-03-11 00:00:29.874736 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-11 00:00:29.874762 | orchestrator -> localhost | |. .... |
2026-03-11 00:00:29.874784 | orchestrator -> localhost | |*.. +ooo . |
2026-03-11 00:00:29.874806 | orchestrator -> localhost | |=*o+..=.. . |
2026-03-11 00:00:29.874827 | orchestrator -> localhost | |+oo+Eo.. o |
2026-03-11 00:00:29.874862 | orchestrator -> localhost | |=.. o.. S . |
2026-03-11 00:00:29.874888 | orchestrator -> localhost | |+o . + o . . |
2026-03-11 00:00:29.874909 | orchestrator -> localhost | |. . o.+.... |
2026-03-11 00:00:29.874930 | orchestrator -> localhost | | oo+o.+. |
2026-03-11 00:00:29.874951 | orchestrator -> localhost | | o+.+=. |
2026-03-11 00:00:29.874971 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-11 00:00:29.875029 | orchestrator -> localhost | ok: Runtime: 0:00:02.614434
2026-03-11 00:00:29.886670 |
2026-03-11 00:00:29.886804 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-11 00:00:29.943236 | orchestrator | ok
2026-03-11 00:00:29.973976 | orchestrator | included: /var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-11 00:00:30.025574 |
2026-03-11 00:00:30.025675 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-11 00:00:30.114722 | orchestrator | skipping: Conditional result was False
2026-03-11 00:00:30.121754 |
2026-03-11 00:00:30.121846 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-11 00:00:31.040563 | orchestrator | changed
2026-03-11 00:00:31.046088 |
2026-03-11 00:00:31.046181 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-11 00:00:31.384658 | orchestrator | ok
2026-03-11 00:00:31.389967 |
2026-03-11 00:00:31.390060 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-11 00:00:31.946525 | orchestrator | ok
2026-03-11 00:00:31.953915 |
2026-03-11 00:00:31.954020 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-11 00:00:32.395446 | orchestrator | ok
2026-03-11 00:00:32.412272 |
2026-03-11 00:00:32.412387 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-11 00:00:32.455040 | orchestrator | skipping: Conditional result was False
2026-03-11 00:00:32.471349 |
2026-03-11 00:00:32.471464 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-11 00:00:33.756681 | orchestrator -> localhost | changed
2026-03-11 00:00:33.773420 |
2026-03-11 00:00:33.773527 | TASK [add-build-sshkey : Add back temp key]
2026-03-11 00:00:34.715794 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/work/f4dbef49419b430cbfedd1f7a77edb21_id_rsa (zuul-build-sshkey)
2026-03-11 00:00:34.715994 | orchestrator -> localhost | ok: Runtime: 0:00:00.030647
2026-03-11 00:00:34.723034 |
2026-03-11 00:00:34.723128 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-11 00:00:35.585589 | orchestrator | ok
2026-03-11 00:00:35.591378 |
2026-03-11 00:00:35.608783 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-11 00:00:35.667364 | orchestrator | skipping: Conditional result was False
2026-03-11 00:00:35.800924 |
2026-03-11 00:00:35.801038 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-11 00:00:36.529224 | orchestrator | ok
2026-03-11 00:00:36.560469 |
2026-03-11 00:00:36.560592 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-11 00:00:36.638750 | orchestrator | ok
2026-03-11 00:00:36.659266 |
2026-03-11 00:00:36.659381 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-11 00:00:37.742445 | orchestrator -> localhost | ok
2026-03-11 00:00:37.749655 |
2026-03-11 00:00:37.749763 | TASK [validate-host : Collect information about the host]
2026-03-11 00:00:39.536740 | orchestrator | ok
2026-03-11 00:00:39.566438 |
2026-03-11 00:00:39.566572 | TASK [validate-host : Sanitize hostname]
2026-03-11 00:00:39.669820 | orchestrator | ok
2026-03-11 00:00:39.679887 |
2026-03-11 00:00:39.679995 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-11 00:00:41.783596 | orchestrator -> localhost | changed
2026-03-11 00:00:41.789793 |
2026-03-11 00:00:41.789900 | TASK [validate-host : Collect information about zuul worker]
2026-03-11 00:00:42.518039 | orchestrator | ok
2026-03-11 00:00:42.523241 |
2026-03-11 00:00:42.523349 | TASK [validate-host : Write out all zuul information for each host]
2026-03-11 00:00:44.015762 | orchestrator -> localhost | changed
2026-03-11 00:00:44.025220 |
2026-03-11 00:00:44.025313 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-11 00:00:44.337527 | orchestrator | ok
2026-03-11 00:00:44.342459 |
2026-03-11 00:00:44.342535 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-11 00:02:07.395851 | orchestrator | changed:
2026-03-11 00:02:07.396081 | orchestrator | .d..t...... src/
2026-03-11 00:02:07.396116 | orchestrator | .d..t...... src/github.com/
2026-03-11 00:02:07.396141 | orchestrator | .d..t...... src/github.com/osism/
2026-03-11 00:02:07.396163 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-11 00:02:07.396185 | orchestrator | RedHat.yml
2026-03-11 00:02:07.410943 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-11 00:02:07.410961 | orchestrator | RedHat.yml
2026-03-11 00:02:07.411025 | orchestrator | = 1.53.0"...
2026-03-11 00:02:19.020045 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-11 00:02:19.037790 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-11 00:02:19.187975 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-11 00:02:20.104236 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-11 00:02:20.170043 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-11 00:02:20.689358 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-11 00:02:20.751443 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-11 00:02:21.241659 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-11 00:02:21.241743 | orchestrator |
2026-03-11 00:02:21.241750 | orchestrator | Providers are signed by their developers.
2026-03-11 00:02:21.241756 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-11 00:02:21.241767 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-11 00:02:21.241801 | orchestrator |
2026-03-11 00:02:21.241807 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-11 00:02:21.241811 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-11 00:02:21.241823 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-11 00:02:21.241833 | orchestrator | you run "tofu init" in the future.
2026-03-11 00:02:21.242269 | orchestrator |
2026-03-11 00:02:21.242313 | orchestrator | OpenTofu has been successfully initialized!
2026-03-11 00:02:21.242334 | orchestrator |
2026-03-11 00:02:21.242339 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-11 00:02:21.242344 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-11 00:02:21.242348 | orchestrator | should now work.
2026-03-11 00:02:21.242352 | orchestrator |
2026-03-11 00:02:21.242356 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-11 00:02:21.242360 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-11 00:02:21.242371 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-11 00:02:21.449370 | orchestrator | Created and switched to workspace "ci"!
2026-03-11 00:02:21.449460 | orchestrator |
2026-03-11 00:02:21.449469 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-11 00:02:21.449474 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-11 00:02:21.449478 | orchestrator | for this configuration.
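[Editor's note: the provider resolution during "tofu init" above is consistent with a required_providers block roughly like the following sketch. Only the ">= 2.2.0" constraint on hashicorp/local is visible in the log; the openstack constraint is an assumption based on the truncated `= 1.53.0"...` fragment, and the block as a whole is illustrative, not the testbed's actual configuration.]

```hcl
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumption, inferred from the truncated constraint fragment above
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # constraint visible in the log
    }
    null = {
      source = "hashicorp/null" # no version constraint: the log shows "Finding latest version"
    }
  }
}
```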
2026-03-11 00:02:21.615085 | orchestrator | ci.auto.tfvars
2026-03-11 00:02:21.618678 | orchestrator | default_custom.tf
2026-03-11 00:02:22.542115 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-11 00:02:23.067815 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-11 00:02:23.320625 | orchestrator |
2026-03-11 00:02:23.320707 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-11 00:02:23.320715 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-11 00:02:23.320742 | orchestrator | + create
2026-03-11 00:02:23.320759 | orchestrator | <= read (data resources)
2026-03-11 00:02:23.320772 | orchestrator |
2026-03-11 00:02:23.320777 | orchestrator | OpenTofu will perform the following actions:
2026-03-11 00:02:23.320881 | orchestrator |
2026-03-11 00:02:23.320896 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-11 00:02:23.320901 | orchestrator | # (config refers to values not yet known)
2026-03-11 00:02:23.320905 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-11 00:02:23.320909 | orchestrator | + checksum = (known after apply)
2026-03-11 00:02:23.320914 | orchestrator | + created_at = (known after apply)
2026-03-11 00:02:23.320918 | orchestrator | + file = (known after apply)
2026-03-11 00:02:23.320922 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.320942 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.320946 | orchestrator | + min_disk_gb = (known after apply)
2026-03-11 00:02:23.320950 | orchestrator | + min_ram_mb = (known after apply)
2026-03-11 00:02:23.320954 | orchestrator | + most_recent = true
2026-03-11 00:02:23.320958 | orchestrator | + name = (known after apply)
2026-03-11 00:02:23.320962 | orchestrator | + protected = (known after apply)
2026-03-11 00:02:23.320966 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.320973 | orchestrator | + schema = (known after apply)
2026-03-11 00:02:23.320977 | orchestrator | + size_bytes = (known after apply)
2026-03-11 00:02:23.320981 | orchestrator | + tags = (known after apply)
2026-03-11 00:02:23.320985 | orchestrator | + updated_at = (known after apply)
2026-03-11 00:02:23.320989 | orchestrator | }
2026-03-11 00:02:23.321068 | orchestrator |
2026-03-11 00:02:23.321081 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-11 00:02:23.321085 | orchestrator | # (config refers to values not yet known)
2026-03-11 00:02:23.321090 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-11 00:02:23.321094 | orchestrator | + checksum = (known after apply)
2026-03-11 00:02:23.321097 | orchestrator | + created_at = (known after apply)
2026-03-11 00:02:23.321101 | orchestrator | + file = (known after apply)
2026-03-11 00:02:23.321105 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.321109 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.321113 | orchestrator | + min_disk_gb = (known after apply)
2026-03-11 00:02:23.321117 | orchestrator | + min_ram_mb = (known after apply)
2026-03-11 00:02:23.321121 | orchestrator | + most_recent = true
2026-03-11 00:02:23.321125 | orchestrator | + name = (known after apply)
2026-03-11 00:02:23.321128 | orchestrator | + protected = (known after apply)
2026-03-11 00:02:23.321132 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.321136 | orchestrator | + schema = (known after apply)
2026-03-11 00:02:23.321140 | orchestrator | + size_bytes = (known after apply)
2026-03-11 00:02:23.321144 | orchestrator | + tags = (known after apply)
2026-03-11 00:02:23.321148 | orchestrator | + updated_at = (known after apply)
2026-03-11 00:02:23.321152 | orchestrator | }
2026-03-11 00:02:23.321227 | orchestrator |
2026-03-11 00:02:23.321239 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-11 00:02:23.321244 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-11 00:02:23.321249 | orchestrator | + content = (known after apply)
2026-03-11 00:02:23.321253 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:23.321257 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:23.321260 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:23.321264 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:23.321268 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:23.321272 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:23.321275 | orchestrator | + directory_permission = "0777"
2026-03-11 00:02:23.321279 | orchestrator | + file_permission = "0644"
2026-03-11 00:02:23.321283 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-11 00:02:23.321287 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.321290 | orchestrator | }
2026-03-11 00:02:23.321355 | orchestrator |
2026-03-11 00:02:23.321367 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-11 00:02:23.321371 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-11 00:02:23.321375 | orchestrator | + content = (known after apply)
2026-03-11 00:02:23.321379 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:23.321383 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:23.321387 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:23.321390 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:23.321394 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:23.321398 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:23.321402 | orchestrator | + directory_permission = "0777"
2026-03-11 00:02:23.321405 | orchestrator | + file_permission = "0644"
2026-03-11 00:02:23.321413 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-11 00:02:23.321417 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.321421 | orchestrator | }
2026-03-11 00:02:23.321487 | orchestrator |
2026-03-11 00:02:23.321504 | orchestrator | # local_file.inventory will be created
2026-03-11 00:02:23.321509 | orchestrator | + resource "local_file" "inventory" {
2026-03-11 00:02:23.321513 | orchestrator | + content = (known after apply)
2026-03-11 00:02:23.321516 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:23.321520 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:23.321524 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:23.321528 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:23.321532 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:23.321536 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:23.321539 | orchestrator | + directory_permission = "0777"
2026-03-11 00:02:23.321543 | orchestrator | + file_permission = "0644"
2026-03-11 00:02:23.321547 | orchestrator | + filename = "inventory.ci"
2026-03-11 00:02:23.321551 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.321554 | orchestrator | }
2026-03-11 00:02:23.321646 | orchestrator |
2026-03-11 00:02:23.321660 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-11 00:02:23.321665 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-11 00:02:23.321669 | orchestrator | + content = (sensitive value)
2026-03-11 00:02:23.321673 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:23.321677 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:23.321681 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:23.321684 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:23.321688 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:23.321692 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:23.321696 | orchestrator | + directory_permission = "0700"
2026-03-11 00:02:23.321699 | orchestrator | + file_permission = "0600"
2026-03-11 00:02:23.321703 | orchestrator | + filename = ".id_rsa.ci"
2026-03-11 00:02:23.321707 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.321711 | orchestrator | }
2026-03-11 00:02:23.321751 | orchestrator |
2026-03-11 00:02:23.321763 | orchestrator | # null_resource.node_semaphore will be created
2026-03-11 00:02:23.321767 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-11 00:02:23.321771 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.321775 | orchestrator | }
2026-03-11 00:02:23.322041 | orchestrator |
2026-03-11 00:02:23.322079 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-11 00:02:23.322084 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-11 00:02:23.322088 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.322092 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.322124 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.322129 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:23.322132 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.322136 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-11 00:02:23.322156 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.322188 | orchestrator | + size = 80
2026-03-11 00:02:23.322200 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.322213 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.322217 | orchestrator | }
2026-03-11 00:02:23.322449 | orchestrator |
2026-03-11 00:02:23.322471 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-11 00:02:23.322486 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:23.322490 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.322493 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.322497 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.322547 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:23.322553 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.322557 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-11 00:02:23.322561 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.322565 | orchestrator | + size = 80
2026-03-11 00:02:23.322584 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.322624 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.322659 | orchestrator | }
2026-03-11 00:02:23.322752 | orchestrator |
2026-03-11 00:02:23.322810 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-11 00:02:23.322828 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:23.322877 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.322889 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.322903 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.322915 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:23.322919 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.322930 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-11 00:02:23.322934 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.322992 | orchestrator | + size = 80
2026-03-11 00:02:23.323005 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.323042 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.323077 | orchestrator | }
2026-03-11 00:02:23.323223 | orchestrator |
2026-03-11 00:02:23.323236 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-11 00:02:23.323240 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:23.323244 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.323248 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.323252 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.323256 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:23.323259 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.323301 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-11 00:02:23.323306 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.323318 | orchestrator | + size = 80
2026-03-11 00:02:23.323363 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.323375 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.323379 | orchestrator | }
2026-03-11 00:02:23.323593 | orchestrator |
2026-03-11 00:02:23.323646 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-11 00:02:23.323665 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:23.323670 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.323673 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.323677 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.323681 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:23.323685 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.323694 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-11 00:02:23.323698 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.323702 | orchestrator | + size = 80
2026-03-11 00:02:23.323706 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.323709 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.323713 | orchestrator | }
2026-03-11 00:02:23.323804 | orchestrator |
2026-03-11 00:02:23.323843 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-11 00:02:23.323847 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:23.323851 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.323855 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.323859 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.323877 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:23.323888 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.323892 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-11 00:02:23.323896 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.323900 | orchestrator | + size = 80
2026-03-11 00:02:23.323904 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.323907 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.323911 | orchestrator | }
2026-03-11 00:02:23.323982 | orchestrator |
2026-03-11 00:02:23.323995 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-11 00:02:23.323999 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:23.324003 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.324007 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.324011 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.324014 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:23.324018 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.324022 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-11 00:02:23.324026 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.324030 | orchestrator | + size = 80
2026-03-11 00:02:23.324033 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.324037 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.324041 | orchestrator | }
2026-03-11 00:02:23.324103 | orchestrator |
2026-03-11 00:02:23.324115 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-11 00:02:23.324120 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.324124 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.324128 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.324132 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.324136 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.324140 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-11 00:02:23.324143 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.324147 | orchestrator | + size = 20
2026-03-11 00:02:23.324151 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.324155 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.324159 | orchestrator | }
2026-03-11 00:02:23.324223 | orchestrator |
2026-03-11 00:02:23.324235 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-11 00:02:23.324240 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.324244 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.324247 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.324251 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.324255 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.324259 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-11 00:02:23.324262 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.324266 | orchestrator | + size = 20
2026-03-11 00:02:23.324270 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.324274 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.324278 | orchestrator | }
2026-03-11 00:02:23.324376 | orchestrator |
2026-03-11 00:02:23.324424 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-11 00:02:23.324429 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.324443 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.324447 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.324451 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.324454 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.324458 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-11 00:02:23.324462 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.324471 | orchestrator | + size = 20
2026-03-11 00:02:23.324496 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.324501 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.324504 | orchestrator | }
2026-03-11 00:02:23.324647 | orchestrator |
2026-03-11 00:02:23.324661 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-11 00:02:23.324665 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.324669 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.324673 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.324676 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.324680 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.324706 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-11 00:02:23.324711 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.324715 | orchestrator | + size = 20
2026-03-11 00:02:23.324718 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.324722 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.324726 | orchestrator | }
2026-03-11 00:02:23.324844 | orchestrator |
2026-03-11 00:02:23.324857 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-11 00:02:23.324861 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.324865 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.324869 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.324873 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.324877 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.324889 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-11 00:02:23.324900 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.324916 | orchestrator | + size = 20
2026-03-11 00:02:23.324927 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.324943 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.324947 | orchestrator | }
2026-03-11 00:02:23.325092 | orchestrator |
2026-03-11 00:02:23.325114 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-11 00:02:23.325118 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.325137 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.325181 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.325193 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.325197 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.325208 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-11 00:02:23.325220 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.325224 | orchestrator | + size = 20
2026-03-11 00:02:23.325228 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.325239 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.325243 | orchestrator | }
2026-03-11 00:02:23.325379 | orchestrator |
2026-03-11 00:02:23.325455 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-11 00:02:23.325480 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.325485 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.325496 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.325541 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.325545 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.325549 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-11 00:02:23.325553 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.325556 | orchestrator | + size = 20
2026-03-11 00:02:23.325560 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.325598 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.325656 | orchestrator | }
2026-03-11 00:02:23.325808 | orchestrator |
2026-03-11 00:02:23.325877 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-11 00:02:23.325882 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:23.325931 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:23.325945 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:23.325957 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.325968 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:23.325972 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-11 00:02:23.325976 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.325980 | orchestrator | + size = 20
2026-03-11 00:02:23.325993 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:23.325997 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:23.326000 | orchestrator | }
2026-03-11 00:02:23.326169 | orchestrator |
2026-03-11 00:02:23.326199 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-11 00:02:23.326213 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-11 00:02:23.326217 | orchestrator | + attachment = (known after apply) 2026-03-11 00:02:23.326221 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.326224 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.326228 | orchestrator | + metadata = (known after apply) 2026-03-11 00:02:23.326232 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-11 00:02:23.326270 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.326274 | orchestrator | + size = 20 2026-03-11 00:02:23.326278 | orchestrator | + volume_retype_policy = "never" 2026-03-11 00:02:23.326296 | orchestrator | + volume_type = "ssd" 2026-03-11 00:02:23.326308 | orchestrator | } 2026-03-11 00:02:23.326910 | orchestrator | 2026-03-11 00:02:23.326941 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-11 00:02:23.326946 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-11 00:02:23.327000 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:23.327004 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:23.327008 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:23.327012 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:23.327030 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.327042 | orchestrator | + config_drive = true 2026-03-11 00:02:23.327046 | orchestrator | + created = (known after apply) 2026-03-11 00:02:23.327058 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:23.327065 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-11 00:02:23.327069 | orchestrator | + force_delete = false 2026-03-11 00:02:23.327073 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:23.327077 | 
orchestrator | + id = (known after apply) 2026-03-11 00:02:23.327081 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:23.327085 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:23.327088 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:23.327092 | orchestrator | + name = "testbed-manager" 2026-03-11 00:02:23.327096 | orchestrator | + power_state = "active" 2026-03-11 00:02:23.327100 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.327104 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:23.327107 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:23.327111 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:23.327115 | orchestrator | + user_data = (sensitive value) 2026-03-11 00:02:23.327119 | orchestrator | 2026-03-11 00:02:23.327123 | orchestrator | + block_device { 2026-03-11 00:02:23.327127 | orchestrator | + boot_index = 0 2026-03-11 00:02:23.327131 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:23.327140 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:23.327144 | orchestrator | + multiattach = false 2026-03-11 00:02:23.327147 | orchestrator | + source_type = "volume" 2026-03-11 00:02:23.327151 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.327160 | orchestrator | } 2026-03-11 00:02:23.327165 | orchestrator | 2026-03-11 00:02:23.327169 | orchestrator | + network { 2026-03-11 00:02:23.327172 | orchestrator | + access_network = false 2026-03-11 00:02:23.327176 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:23.327180 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:23.327184 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:23.327187 | orchestrator | + name = (known after apply) 2026-03-11 00:02:23.327191 | orchestrator | + port = (known after apply) 2026-03-11 00:02:23.327195 | orchestrator | + uuid = (known after apply) 2026-03-11 
00:02:23.327199 | orchestrator | } 2026-03-11 00:02:23.327203 | orchestrator | } 2026-03-11 00:02:23.327393 | orchestrator | 2026-03-11 00:02:23.327405 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-11 00:02:23.327420 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:23.327424 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:23.327428 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:23.327432 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:23.327436 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:23.327439 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.327493 | orchestrator | + config_drive = true 2026-03-11 00:02:23.327498 | orchestrator | + created = (known after apply) 2026-03-11 00:02:23.327501 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:23.327505 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:23.327509 | orchestrator | + force_delete = false 2026-03-11 00:02:23.327538 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:23.327550 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.327563 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:23.327567 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:23.327571 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:23.327575 | orchestrator | + name = "testbed-node-0" 2026-03-11 00:02:23.327579 | orchestrator | + power_state = "active" 2026-03-11 00:02:23.327583 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.327624 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:23.327654 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:23.327658 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:23.327662 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:23.327666 | orchestrator | 2026-03-11 00:02:23.327695 | orchestrator | + block_device { 2026-03-11 00:02:23.327740 | orchestrator | + boot_index = 0 2026-03-11 00:02:23.327745 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:23.327748 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:23.327752 | orchestrator | + multiattach = false 2026-03-11 00:02:23.327756 | orchestrator | + source_type = "volume" 2026-03-11 00:02:23.327763 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.327767 | orchestrator | } 2026-03-11 00:02:23.327771 | orchestrator | 2026-03-11 00:02:23.327774 | orchestrator | + network { 2026-03-11 00:02:23.327778 | orchestrator | + access_network = false 2026-03-11 00:02:23.327782 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:23.327786 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:23.327790 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:23.327794 | orchestrator | + name = (known after apply) 2026-03-11 00:02:23.327798 | orchestrator | + port = (known after apply) 2026-03-11 00:02:23.327801 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.327805 | orchestrator | } 2026-03-11 00:02:23.327809 | orchestrator | } 2026-03-11 00:02:23.328197 | orchestrator | 2026-03-11 00:02:23.328229 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-11 00:02:23.328242 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:23.328253 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:23.328263 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:23.328267 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:23.328280 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:23.328284 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.328288 
| orchestrator | + config_drive = true 2026-03-11 00:02:23.328300 | orchestrator | + created = (known after apply) 2026-03-11 00:02:23.328305 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:23.328308 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:23.328312 | orchestrator | + force_delete = false 2026-03-11 00:02:23.328325 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:23.328329 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.328333 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:23.328336 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:23.328340 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:23.328381 | orchestrator | + name = "testbed-node-1" 2026-03-11 00:02:23.328385 | orchestrator | + power_state = "active" 2026-03-11 00:02:23.328389 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.328393 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:23.328397 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:23.328401 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:23.328404 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:23.328408 | orchestrator | 2026-03-11 00:02:23.328412 | orchestrator | + block_device { 2026-03-11 00:02:23.328416 | orchestrator | + boot_index = 0 2026-03-11 00:02:23.328420 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:23.328424 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:23.328452 | orchestrator | + multiattach = false 2026-03-11 00:02:23.328457 | orchestrator | + source_type = "volume" 2026-03-11 00:02:23.328460 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.328464 | orchestrator | } 2026-03-11 00:02:23.328468 | orchestrator | 2026-03-11 00:02:23.328472 | orchestrator | + network { 2026-03-11 00:02:23.328502 | orchestrator | + access_network = 
false 2026-03-11 00:02:23.328514 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:23.328519 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:23.328522 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:23.328535 | orchestrator | + name = (known after apply) 2026-03-11 00:02:23.328576 | orchestrator | + port = (known after apply) 2026-03-11 00:02:23.328580 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.328584 | orchestrator | } 2026-03-11 00:02:23.328588 | orchestrator | } 2026-03-11 00:02:23.329149 | orchestrator | 2026-03-11 00:02:23.329164 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-11 00:02:23.329169 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:23.329173 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:23.329177 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:23.329182 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:23.329211 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:23.329220 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.329224 | orchestrator | + config_drive = true 2026-03-11 00:02:23.329228 | orchestrator | + created = (known after apply) 2026-03-11 00:02:23.329232 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:23.329262 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:23.329269 | orchestrator | + force_delete = false 2026-03-11 00:02:23.329273 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:23.329277 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.329281 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:23.329290 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:23.329293 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:23.329297 | orchestrator | + name = 
"testbed-node-2" 2026-03-11 00:02:23.329301 | orchestrator | + power_state = "active" 2026-03-11 00:02:23.329305 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.329309 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:23.329312 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:23.329316 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:23.329320 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:23.329324 | orchestrator | 2026-03-11 00:02:23.329328 | orchestrator | + block_device { 2026-03-11 00:02:23.329332 | orchestrator | + boot_index = 0 2026-03-11 00:02:23.329336 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:23.329339 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:23.329343 | orchestrator | + multiattach = false 2026-03-11 00:02:23.329347 | orchestrator | + source_type = "volume" 2026-03-11 00:02:23.329351 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.329355 | orchestrator | } 2026-03-11 00:02:23.329359 | orchestrator | 2026-03-11 00:02:23.329362 | orchestrator | + network { 2026-03-11 00:02:23.329366 | orchestrator | + access_network = false 2026-03-11 00:02:23.329370 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:23.329374 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:23.329378 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:23.329381 | orchestrator | + name = (known after apply) 2026-03-11 00:02:23.329385 | orchestrator | + port = (known after apply) 2026-03-11 00:02:23.329389 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.329393 | orchestrator | } 2026-03-11 00:02:23.329397 | orchestrator | } 2026-03-11 00:02:23.329792 | orchestrator | 2026-03-11 00:02:23.329860 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-11 00:02:23.329865 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:23.329869 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:23.329873 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:23.329877 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:23.329881 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:23.329885 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.329888 | orchestrator | + config_drive = true 2026-03-11 00:02:23.329892 | orchestrator | + created = (known after apply) 2026-03-11 00:02:23.329896 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:23.329900 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:23.329903 | orchestrator | + force_delete = false 2026-03-11 00:02:23.329907 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:23.329911 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.329915 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:23.329919 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:23.329922 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:23.329926 | orchestrator | + name = "testbed-node-3" 2026-03-11 00:02:23.329930 | orchestrator | + power_state = "active" 2026-03-11 00:02:23.329934 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.329937 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:23.329941 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:23.329945 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:23.329949 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:23.329952 | orchestrator | 2026-03-11 00:02:23.329956 | orchestrator | + block_device { 2026-03-11 00:02:23.329964 | orchestrator | + boot_index = 0 2026-03-11 00:02:23.329968 | orchestrator | + delete_on_termination = false 2026-03-11 
00:02:23.329971 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:23.329980 | orchestrator | + multiattach = false 2026-03-11 00:02:23.329983 | orchestrator | + source_type = "volume" 2026-03-11 00:02:23.329987 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.329991 | orchestrator | } 2026-03-11 00:02:23.329995 | orchestrator | 2026-03-11 00:02:23.329999 | orchestrator | + network { 2026-03-11 00:02:23.330002 | orchestrator | + access_network = false 2026-03-11 00:02:23.330006 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:23.330010 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:23.330034 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:23.330039 | orchestrator | + name = (known after apply) 2026-03-11 00:02:23.330043 | orchestrator | + port = (known after apply) 2026-03-11 00:02:23.330046 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.330050 | orchestrator | } 2026-03-11 00:02:23.330054 | orchestrator | } 2026-03-11 00:02:23.330376 | orchestrator | 2026-03-11 00:02:23.330391 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-11 00:02:23.330395 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:23.330399 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:23.330403 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:23.330407 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:23.330411 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:23.330445 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.330449 | orchestrator | + config_drive = true 2026-03-11 00:02:23.330453 | orchestrator | + created = (known after apply) 2026-03-11 00:02:23.330457 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:23.330460 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:23.330464 | 
orchestrator | + force_delete = false 2026-03-11 00:02:23.330483 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:23.330521 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.330525 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:23.330529 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:23.330533 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:23.330571 | orchestrator | + name = "testbed-node-4" 2026-03-11 00:02:23.330583 | orchestrator | + power_state = "active" 2026-03-11 00:02:23.330594 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.330598 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:23.330609 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:23.330613 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:23.330626 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:23.330667 | orchestrator | 2026-03-11 00:02:23.330678 | orchestrator | + block_device { 2026-03-11 00:02:23.330683 | orchestrator | + boot_index = 0 2026-03-11 00:02:23.330694 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:23.330698 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:23.330710 | orchestrator | + multiattach = false 2026-03-11 00:02:23.330714 | orchestrator | + source_type = "volume" 2026-03-11 00:02:23.330718 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.330748 | orchestrator | } 2026-03-11 00:02:23.330761 | orchestrator | 2026-03-11 00:02:23.330765 | orchestrator | + network { 2026-03-11 00:02:23.330777 | orchestrator | + access_network = false 2026-03-11 00:02:23.330789 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:23.330793 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:23.330796 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:23.330834 | orchestrator | + name = (known 
after apply) 2026-03-11 00:02:23.330839 | orchestrator | + port = (known after apply) 2026-03-11 00:02:23.330843 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.330866 | orchestrator | } 2026-03-11 00:02:23.330928 | orchestrator | } 2026-03-11 00:02:23.330969 | orchestrator | 2026-03-11 00:02:23.330974 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-11 00:02:23.330978 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:23.330982 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:23.330986 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:23.331046 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:23.331052 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:23.331126 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:23.331131 | orchestrator | + config_drive = true 2026-03-11 00:02:23.331135 | orchestrator | + created = (known after apply) 2026-03-11 00:02:23.331172 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:23.331177 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:23.331180 | orchestrator | + force_delete = false 2026-03-11 00:02:23.331196 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:23.331208 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.331212 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:23.331216 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:23.331220 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:23.331224 | orchestrator | + name = "testbed-node-5" 2026-03-11 00:02:23.331236 | orchestrator | + power_state = "active" 2026-03-11 00:02:23.331240 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.331244 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:23.331256 | orchestrator | + 
stop_before_destroy = false 2026-03-11 00:02:23.331261 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:23.331264 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:23.331268 | orchestrator | 2026-03-11 00:02:23.331272 | orchestrator | + block_device { 2026-03-11 00:02:23.331284 | orchestrator | + boot_index = 0 2026-03-11 00:02:23.331288 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:23.331292 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:23.331295 | orchestrator | + multiattach = false 2026-03-11 00:02:23.331307 | orchestrator | + source_type = "volume" 2026-03-11 00:02:23.331311 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.331315 | orchestrator | } 2026-03-11 00:02:23.331319 | orchestrator | 2026-03-11 00:02:23.331330 | orchestrator | + network { 2026-03-11 00:02:23.331335 | orchestrator | + access_network = false 2026-03-11 00:02:23.331338 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:23.331342 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:23.331373 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:23.331377 | orchestrator | + name = (known after apply) 2026-03-11 00:02:23.331381 | orchestrator | + port = (known after apply) 2026-03-11 00:02:23.331385 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:23.331407 | orchestrator | } 2026-03-11 00:02:23.331420 | orchestrator | } 2026-03-11 00:02:23.331431 | orchestrator | 2026-03-11 00:02:23.331435 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-11 00:02:23.331447 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-11 00:02:23.331451 | orchestrator | + fingerprint = (known after apply) 2026-03-11 00:02:23.331455 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.331474 | orchestrator | + name = "testbed" 2026-03-11 00:02:23.331479 | orchestrator | + private_key = 
(sensitive value) 2026-03-11 00:02:23.331483 | orchestrator | + public_key = (known after apply) 2026-03-11 00:02:23.331487 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.331490 | orchestrator | + user_id = (known after apply) 2026-03-11 00:02:23.331494 | orchestrator | } 2026-03-11 00:02:23.331530 | orchestrator | 2026-03-11 00:02:23.331542 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-11 00:02:23.331546 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-11 00:02:23.331591 | orchestrator | + device = (known after apply) 2026-03-11 00:02:23.331595 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.331599 | orchestrator | + instance_id = (known after apply) 2026-03-11 00:02:23.331603 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.331607 | orchestrator | + volume_id = (known after apply) 2026-03-11 00:02:23.331618 | orchestrator | } 2026-03-11 00:02:23.331622 | orchestrator | 2026-03-11 00:02:23.331626 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-11 00:02:23.331644 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-11 00:02:23.331664 | orchestrator | + device = (known after apply) 2026-03-11 00:02:23.331676 | orchestrator | + id = (known after apply) 2026-03-11 00:02:23.331710 | orchestrator | + instance_id = (known after apply) 2026-03-11 00:02:23.331722 | orchestrator | + region = (known after apply) 2026-03-11 00:02:23.331726 | orchestrator | + volume_id = (known after apply) 2026-03-11 00:02:23.331737 | orchestrator | } 2026-03-11 00:02:23.331749 | orchestrator | 2026-03-11 00:02:23.331753 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-11 00:02:23.331765 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-03-11 00:02:23.331769 | orchestrator | + device = (known after apply)
2026-03-11 00:02:23.331781 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.331785 | orchestrator | + instance_id = (known after apply)
2026-03-11 00:02:23.331789 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.331823 | orchestrator | + volume_id = (known after apply)
2026-03-11 00:02:23.331827 | orchestrator | }
2026-03-11 00:02:23.331831 | orchestrator |
2026-03-11 00:02:23.331835 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-03-11 00:02:23.331839 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-11 00:02:23.331842 | orchestrator | + device = (known after apply)
2026-03-11 00:02:23.331883 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.331908 | orchestrator | + instance_id = (known after apply)
2026-03-11 00:02:23.331923 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.331927 | orchestrator | + volume_id = (known after apply)
2026-03-11 00:02:23.331937 | orchestrator | }
2026-03-11 00:02:23.331942 | orchestrator |
2026-03-11 00:02:23.331953 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-03-11 00:02:23.331957 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-11 00:02:23.331970 | orchestrator | + device = (known after apply)
2026-03-11 00:02:23.331974 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.331986 | orchestrator | + instance_id = (known after apply)
2026-03-11 00:02:23.332000 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332004 | orchestrator | + volume_id = (known after apply)
2026-03-11 00:02:23.332021 | orchestrator | }
2026-03-11 00:02:23.332025 | orchestrator |
2026-03-11 00:02:23.332084 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-03-11 00:02:23.332088 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-11 00:02:23.332092 | orchestrator | + device = (known after apply)
2026-03-11 00:02:23.332096 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332143 | orchestrator | + instance_id = (known after apply)
2026-03-11 00:02:23.332147 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332158 | orchestrator | + volume_id = (known after apply)
2026-03-11 00:02:23.332162 | orchestrator | }
2026-03-11 00:02:23.332234 | orchestrator |
2026-03-11 00:02:23.332239 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-03-11 00:02:23.332243 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-11 00:02:23.332246 | orchestrator | + device = (known after apply)
2026-03-11 00:02:23.332281 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332285 | orchestrator | + instance_id = (known after apply)
2026-03-11 00:02:23.332288 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332296 | orchestrator | + volume_id = (known after apply)
2026-03-11 00:02:23.332323 | orchestrator | }
2026-03-11 00:02:23.332362 | orchestrator |
2026-03-11 00:02:23.332367 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-03-11 00:02:23.332371 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-11 00:02:23.332378 | orchestrator | + device = (known after apply)
2026-03-11 00:02:23.332382 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332386 | orchestrator | + instance_id = (known after apply)
2026-03-11 00:02:23.332389 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332393 | orchestrator | + volume_id = (known after apply)
2026-03-11 00:02:23.332397 | orchestrator | }
2026-03-11 00:02:23.332401 | orchestrator |
2026-03-11 00:02:23.332404 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-03-11 00:02:23.332408 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-11 00:02:23.332412 | orchestrator | + device = (known after apply)
2026-03-11 00:02:23.332416 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332420 | orchestrator | + instance_id = (known after apply)
2026-03-11 00:02:23.332423 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332427 | orchestrator | + volume_id = (known after apply)
2026-03-11 00:02:23.332431 | orchestrator | }
2026-03-11 00:02:23.332435 | orchestrator |
2026-03-11 00:02:23.332439 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-03-11 00:02:23.332443 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-03-11 00:02:23.332447 | orchestrator | + fixed_ip = (known after apply)
2026-03-11 00:02:23.332451 | orchestrator | + floating_ip = (known after apply)
2026-03-11 00:02:23.332454 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332458 | orchestrator | + port_id = (known after apply)
2026-03-11 00:02:23.332462 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332466 | orchestrator | }
2026-03-11 00:02:23.332469 | orchestrator |
2026-03-11 00:02:23.332473 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-03-11 00:02:23.332477 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-03-11 00:02:23.332481 | orchestrator | + address = (known after apply)
2026-03-11 00:02:23.332485 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.332488 | orchestrator | + dns_domain = (known after apply)
2026-03-11 00:02:23.332492 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.332496 | orchestrator | + fixed_ip = (known after apply)
2026-03-11 00:02:23.332500 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332504 | orchestrator | + pool = "public"
2026-03-11 00:02:23.332507 | orchestrator | + port_id = (known after apply)
2026-03-11 00:02:23.332511 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332515 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.332519 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.332522 | orchestrator | }
2026-03-11 00:02:23.332588 | orchestrator |
2026-03-11 00:02:23.332593 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-03-11 00:02:23.332597 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-03-11 00:02:23.332601 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.332605 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.332609 | orchestrator | + availability_zone_hints = [
2026-03-11 00:02:23.332612 | orchestrator | + "nova",
2026-03-11 00:02:23.332616 | orchestrator | ]
2026-03-11 00:02:23.332620 | orchestrator | + dns_domain = (known after apply)
2026-03-11 00:02:23.332624 | orchestrator | + external = (known after apply)
2026-03-11 00:02:23.332670 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332674 | orchestrator | + mtu = (known after apply)
2026-03-11 00:02:23.332677 | orchestrator | + name = "net-testbed-management"
2026-03-11 00:02:23.332681 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.332690 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.332694 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332698 | orchestrator | + shared = (known after apply)
2026-03-11 00:02:23.332702 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.332705 | orchestrator | + transparent_vlan = (known after apply)
2026-03-11 00:02:23.332709 | orchestrator |
2026-03-11 00:02:23.332713 | orchestrator | + segments (known after apply)
2026-03-11 00:02:23.332717 | orchestrator | }
2026-03-11 00:02:23.332721 | orchestrator |
2026-03-11 00:02:23.332724 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-03-11 00:02:23.332728 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-03-11 00:02:23.332732 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.332736 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-11 00:02:23.332740 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-11 00:02:23.332749 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.332753 | orchestrator | + device_id = (known after apply)
2026-03-11 00:02:23.332757 | orchestrator | + device_owner = (known after apply)
2026-03-11 00:02:23.332760 | orchestrator | + dns_assignment = (known after apply)
2026-03-11 00:02:23.332764 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.332768 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332776 | orchestrator | + mac_address = (known after apply)
2026-03-11 00:02:23.332779 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.332783 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.332787 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.332791 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332795 | orchestrator | + security_group_ids = (known after apply)
2026-03-11 00:02:23.332798 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.332802 | orchestrator |
2026-03-11 00:02:23.332806 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.332810 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-11 00:02:23.332814 | orchestrator | }
2026-03-11 00:02:23.332817 | orchestrator |
2026-03-11 00:02:23.332821 | orchestrator | + binding (known after apply)
2026-03-11 00:02:23.332825 | orchestrator |
2026-03-11 00:02:23.332829 | orchestrator | + fixed_ip {
2026-03-11 00:02:23.332833 | orchestrator | + ip_address = "192.168.16.5"
2026-03-11 00:02:23.332837 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.332841 | orchestrator | }
2026-03-11 00:02:23.332844 | orchestrator | }
2026-03-11 00:02:23.332848 | orchestrator |
2026-03-11 00:02:23.332852 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-03-11 00:02:23.332856 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-11 00:02:23.332860 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.332864 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-11 00:02:23.332867 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-11 00:02:23.332871 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.332875 | orchestrator | + device_id = (known after apply)
2026-03-11 00:02:23.332879 | orchestrator | + device_owner = (known after apply)
2026-03-11 00:02:23.332882 | orchestrator | + dns_assignment = (known after apply)
2026-03-11 00:02:23.332886 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.332890 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.332894 | orchestrator | + mac_address = (known after apply)
2026-03-11 00:02:23.332897 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.332901 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.332905 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.332909 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.332915 | orchestrator | + security_group_ids = (known after apply)
2026-03-11 00:02:23.332919 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.332923 | orchestrator |
2026-03-11 00:02:23.332927 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.332931 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-11 00:02:23.332934 | orchestrator | }
2026-03-11 00:02:23.332938 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.332942 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-11 00:02:23.332946 | orchestrator | }
2026-03-11 00:02:23.332950 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.332954 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-11 00:02:23.332958 | orchestrator | }
2026-03-11 00:02:23.332962 | orchestrator |
2026-03-11 00:02:23.332965 | orchestrator | + binding (known after apply)
2026-03-11 00:02:23.332969 | orchestrator |
2026-03-11 00:02:23.332973 | orchestrator | + fixed_ip {
2026-03-11 00:02:23.332977 | orchestrator | + ip_address = "192.168.16.10"
2026-03-11 00:02:23.332980 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.332984 | orchestrator | }
2026-03-11 00:02:23.332988 | orchestrator | }
2026-03-11 00:02:23.332992 | orchestrator |
2026-03-11 00:02:23.332996 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-03-11 00:02:23.333000 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-11 00:02:23.333003 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.333007 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-11 00:02:23.333011 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-11 00:02:23.333015 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.333019 | orchestrator | + device_id = (known after apply)
2026-03-11 00:02:23.333022 | orchestrator | + device_owner = (known after apply)
2026-03-11 00:02:23.333026 | orchestrator | + dns_assignment = (known after apply)
2026-03-11 00:02:23.333030 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.333034 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333038 | orchestrator | + mac_address = (known after apply)
2026-03-11 00:02:23.333041 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.333045 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.333049 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.333053 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333057 | orchestrator | + security_group_ids = (known after apply)
2026-03-11 00:02:23.333060 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333064 | orchestrator |
2026-03-11 00:02:23.333068 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333072 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-11 00:02:23.333076 | orchestrator | }
2026-03-11 00:02:23.333079 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333083 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-11 00:02:23.333087 | orchestrator | }
2026-03-11 00:02:23.333091 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333095 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-11 00:02:23.333098 | orchestrator | }
2026-03-11 00:02:23.333102 | orchestrator |
2026-03-11 00:02:23.333106 | orchestrator | + binding (known after apply)
2026-03-11 00:02:23.333110 | orchestrator |
2026-03-11 00:02:23.333114 | orchestrator | + fixed_ip {
2026-03-11 00:02:23.333117 | orchestrator | + ip_address = "192.168.16.11"
2026-03-11 00:02:23.333121 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.333125 | orchestrator | }
2026-03-11 00:02:23.333129 | orchestrator | }
2026-03-11 00:02:23.333132 | orchestrator |
2026-03-11 00:02:23.333136 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-03-11 00:02:23.333140 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-11 00:02:23.333144 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.333148 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-11 00:02:23.333152 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-11 00:02:23.333156 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.333162 | orchestrator | + device_id = (known after apply)
2026-03-11 00:02:23.333166 | orchestrator | + device_owner = (known after apply)
2026-03-11 00:02:23.333170 | orchestrator | + dns_assignment = (known after apply)
2026-03-11 00:02:23.333174 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.333180 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333184 | orchestrator | + mac_address = (known after apply)
2026-03-11 00:02:23.333193 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.333197 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.333201 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.333205 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333209 | orchestrator | + security_group_ids = (known after apply)
2026-03-11 00:02:23.333212 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333216 | orchestrator |
2026-03-11 00:02:23.333220 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333224 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-11 00:02:23.333228 | orchestrator | }
2026-03-11 00:02:23.333232 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333235 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-11 00:02:23.333239 | orchestrator | }
2026-03-11 00:02:23.333243 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333247 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-11 00:02:23.333251 | orchestrator | }
2026-03-11 00:02:23.333254 | orchestrator |
2026-03-11 00:02:23.333258 | orchestrator | + binding (known after apply)
2026-03-11 00:02:23.333262 | orchestrator |
2026-03-11 00:02:23.333266 | orchestrator | + fixed_ip {
2026-03-11 00:02:23.333270 | orchestrator | + ip_address = "192.168.16.12"
2026-03-11 00:02:23.333273 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.333277 | orchestrator | }
2026-03-11 00:02:23.333281 | orchestrator | }
2026-03-11 00:02:23.333285 | orchestrator |
2026-03-11 00:02:23.333289 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-03-11 00:02:23.333293 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-11 00:02:23.333296 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.333300 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-11 00:02:23.333304 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-11 00:02:23.333308 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.333312 | orchestrator | + device_id = (known after apply)
2026-03-11 00:02:23.333315 | orchestrator | + device_owner = (known after apply)
2026-03-11 00:02:23.333319 | orchestrator | + dns_assignment = (known after apply)
2026-03-11 00:02:23.333323 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.333327 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333330 | orchestrator | + mac_address = (known after apply)
2026-03-11 00:02:23.333334 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.333338 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.333342 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.333345 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333349 | orchestrator | + security_group_ids = (known after apply)
2026-03-11 00:02:23.333353 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333357 | orchestrator |
2026-03-11 00:02:23.333361 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333364 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-11 00:02:23.333368 | orchestrator | }
2026-03-11 00:02:23.333372 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333376 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-11 00:02:23.333380 | orchestrator | }
2026-03-11 00:02:23.333383 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333387 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-11 00:02:23.333391 | orchestrator | }
2026-03-11 00:02:23.333395 | orchestrator |
2026-03-11 00:02:23.333401 | orchestrator | + binding (known after apply)
2026-03-11 00:02:23.333405 | orchestrator |
2026-03-11 00:02:23.333409 | orchestrator | + fixed_ip {
2026-03-11 00:02:23.333413 | orchestrator | + ip_address = "192.168.16.13"
2026-03-11 00:02:23.333417 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.333420 | orchestrator | }
2026-03-11 00:02:23.333424 | orchestrator | }
2026-03-11 00:02:23.333428 | orchestrator |
2026-03-11 00:02:23.333432 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-03-11 00:02:23.333436 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-11 00:02:23.333439 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.333443 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-11 00:02:23.333447 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-11 00:02:23.333451 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.333455 | orchestrator | + device_id = (known after apply)
2026-03-11 00:02:23.333458 | orchestrator | + device_owner = (known after apply)
2026-03-11 00:02:23.333462 | orchestrator | + dns_assignment = (known after apply)
2026-03-11 00:02:23.333466 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.333470 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333473 | orchestrator | + mac_address = (known after apply)
2026-03-11 00:02:23.333477 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.333481 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.333485 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.333488 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333492 | orchestrator | + security_group_ids = (known after apply)
2026-03-11 00:02:23.333496 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333501 | orchestrator |
2026-03-11 00:02:23.333505 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333508 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-11 00:02:23.333512 | orchestrator | }
2026-03-11 00:02:23.333516 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333520 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-11 00:02:23.333524 | orchestrator | }
2026-03-11 00:02:23.333527 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333531 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-11 00:02:23.333535 | orchestrator | }
2026-03-11 00:02:23.333539 | orchestrator |
2026-03-11 00:02:23.333543 | orchestrator | + binding (known after apply)
2026-03-11 00:02:23.333547 | orchestrator |
2026-03-11 00:02:23.333550 | orchestrator | + fixed_ip {
2026-03-11 00:02:23.333554 | orchestrator | + ip_address = "192.168.16.14"
2026-03-11 00:02:23.333558 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.333562 | orchestrator | }
2026-03-11 00:02:23.333566 | orchestrator | }
2026-03-11 00:02:23.333570 | orchestrator |
2026-03-11 00:02:23.333573 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-03-11 00:02:23.333577 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-11 00:02:23.333581 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.333585 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-11 00:02:23.333589 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-11 00:02:23.333593 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.333596 | orchestrator | + device_id = (known after apply)
2026-03-11 00:02:23.333600 | orchestrator | + device_owner = (known after apply)
2026-03-11 00:02:23.333604 | orchestrator | + dns_assignment = (known after apply)
2026-03-11 00:02:23.333611 | orchestrator | + dns_name = (known after apply)
2026-03-11 00:02:23.333615 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333619 | orchestrator | + mac_address = (known after apply)
2026-03-11 00:02:23.333623 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.333653 | orchestrator | + port_security_enabled = (known after apply)
2026-03-11 00:02:23.333658 | orchestrator | + qos_policy_id = (known after apply)
2026-03-11 00:02:23.333666 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333669 | orchestrator | + security_group_ids = (known after apply)
2026-03-11 00:02:23.333673 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333677 | orchestrator |
2026-03-11 00:02:23.333681 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333684 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-11 00:02:23.333688 | orchestrator | }
2026-03-11 00:02:23.333692 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333696 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-11 00:02:23.333700 | orchestrator | }
2026-03-11 00:02:23.333703 | orchestrator | + allowed_address_pairs {
2026-03-11 00:02:23.333707 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-11 00:02:23.333711 | orchestrator | }
2026-03-11 00:02:23.333715 | orchestrator |
2026-03-11 00:02:23.333721 | orchestrator | + binding (known after apply)
2026-03-11 00:02:23.333725 | orchestrator |
2026-03-11 00:02:23.333729 | orchestrator | + fixed_ip {
2026-03-11 00:02:23.333733 | orchestrator | + ip_address = "192.168.16.15"
2026-03-11 00:02:23.333737 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.333741 | orchestrator | }
2026-03-11 00:02:23.333744 | orchestrator | }
2026-03-11 00:02:23.333748 | orchestrator |
2026-03-11 00:02:23.333752 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-03-11 00:02:23.333756 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-03-11 00:02:23.333760 | orchestrator | + force_destroy = false
2026-03-11 00:02:23.333763 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333767 | orchestrator | + port_id = (known after apply)
2026-03-11 00:02:23.333771 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333775 | orchestrator | + router_id = (known after apply)
2026-03-11 00:02:23.333779 | orchestrator | + subnet_id = (known after apply)
2026-03-11 00:02:23.333782 | orchestrator | }
2026-03-11 00:02:23.333786 | orchestrator |
2026-03-11 00:02:23.333790 | orchestrator | # openstack_networking_router_v2.router will be created
2026-03-11 00:02:23.333794 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-03-11 00:02:23.333797 | orchestrator | + admin_state_up = (known after apply)
2026-03-11 00:02:23.333801 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.333805 | orchestrator | + availability_zone_hints = [
2026-03-11 00:02:23.333809 | orchestrator | + "nova",
2026-03-11 00:02:23.333813 | orchestrator | ]
2026-03-11 00:02:23.333816 | orchestrator | + distributed = (known after apply)
2026-03-11 00:02:23.333820 | orchestrator | + enable_snat = (known after apply)
2026-03-11 00:02:23.333824 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-03-11 00:02:23.333828 | orchestrator | + external_qos_policy_id = (known after apply)
2026-03-11 00:02:23.333831 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333835 | orchestrator | + name = "testbed"
2026-03-11 00:02:23.333839 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333843 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333847 | orchestrator |
2026-03-11 00:02:23.333850 | orchestrator | + external_fixed_ip (known after apply)
2026-03-11 00:02:23.333854 | orchestrator | }
2026-03-11 00:02:23.333858 | orchestrator |
2026-03-11 00:02:23.333862 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-03-11 00:02:23.333867 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-03-11 00:02:23.333870 | orchestrator | + description = "ssh"
2026-03-11 00:02:23.333874 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.333878 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.333882 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333886 | orchestrator | + port_range_max = 22
2026-03-11 00:02:23.333889 | orchestrator | + port_range_min = 22
2026-03-11 00:02:23.333893 | orchestrator | + protocol = "tcp"
2026-03-11 00:02:23.333897 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333907 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.333910 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.333914 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-11 00:02:23.333918 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.333922 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333926 | orchestrator | }
2026-03-11 00:02:23.333929 | orchestrator |
2026-03-11 00:02:23.333933 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-03-11 00:02:23.333937 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-03-11 00:02:23.333941 | orchestrator | + description = "wireguard"
2026-03-11 00:02:23.333945 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.333948 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.333952 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.333956 | orchestrator | + port_range_max = 51820
2026-03-11 00:02:23.333960 | orchestrator | + port_range_min = 51820
2026-03-11 00:02:23.333964 | orchestrator | + protocol = "udp"
2026-03-11 00:02:23.333967 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.333971 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.333975 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.333979 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-11 00:02:23.333982 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.333986 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.333990 | orchestrator | }
2026-03-11 00:02:23.333994 | orchestrator |
2026-03-11 00:02:23.333998 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-03-11 00:02:23.334001 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-03-11 00:02:23.334005 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.334009 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.334037 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334042 | orchestrator | + protocol = "tcp"
2026-03-11 00:02:23.334046 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334053 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.334058 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.334061 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-11 00:02:23.334065 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.334069 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334073 | orchestrator | }
2026-03-11 00:02:23.334077 | orchestrator |
2026-03-11 00:02:23.334081 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-03-11 00:02:23.334084 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-03-11 00:02:23.334088 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.334092 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.334096 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334100 | orchestrator | + protocol = "udp"
2026-03-11 00:02:23.334103 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334107 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.334111 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.334115 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-11 00:02:23.334119 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.334147 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334151 | orchestrator | }
2026-03-11 00:02:23.334155 | orchestrator |
2026-03-11 00:02:23.334159 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-03-11 00:02:23.334167 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-03-11 00:02:23.334171 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.334175 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.334178 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334182 | orchestrator | + protocol = "icmp"
2026-03-11 00:02:23.334186 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334190 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.334194 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.334197 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-11 00:02:23.334201 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.334205 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334209 | orchestrator | }
2026-03-11 00:02:23.334213 | orchestrator |
2026-03-11 00:02:23.334217 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-03-11 00:02:23.334220 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-03-11 00:02:23.334224 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.334228 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.334232 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334236 | orchestrator | + protocol = "tcp"
2026-03-11 00:02:23.334240 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334243 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.334250 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.334254 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-11 00:02:23.334258 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.334262 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334266 | orchestrator | }
2026-03-11 00:02:23.334269 | orchestrator |
2026-03-11 00:02:23.334273 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-03-11 00:02:23.334277 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-03-11 00:02:23.334281 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.334285 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.334288 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334292 | orchestrator | + protocol = "udp"
2026-03-11 00:02:23.334296 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334300 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.334304 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.334308 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-11 00:02:23.334311 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.334315 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334319 | orchestrator | }
2026-03-11 00:02:23.334323 | orchestrator |
2026-03-11 00:02:23.334327 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-03-11 00:02:23.334330 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-03-11 00:02:23.334334 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.334340 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.334344 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334348 | orchestrator | + protocol = "icmp"
2026-03-11 00:02:23.334352 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334356 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.334373 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.334378 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-11 00:02:23.334381 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.334385 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334392 | orchestrator | }
2026-03-11 00:02:23.334396 | orchestrator |
2026-03-11 00:02:23.334400 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-03-11 00:02:23.334404 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-03-11 00:02:23.334407 | orchestrator | + description = "vrrp"
2026-03-11 00:02:23.334411 | orchestrator | + direction = "ingress"
2026-03-11 00:02:23.334415 | orchestrator | + ethertype = "IPv4"
2026-03-11 00:02:23.334419 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334423 | orchestrator | + protocol = "112"
2026-03-11 00:02:23.334426 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334433 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-11 00:02:23.334437 | orchestrator | + remote_group_id = (known after apply)
2026-03-11 00:02:23.334441 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-11 00:02:23.334444 | orchestrator | + security_group_id = (known after apply)
2026-03-11 00:02:23.334448 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334452 | orchestrator | }
2026-03-11 00:02:23.334456 | orchestrator |
2026-03-11 00:02:23.334460 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-03-11 00:02:23.334464 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-03-11 00:02:23.334467 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.334471 | orchestrator | + description = "management security group"
2026-03-11 00:02:23.334475 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334479 | orchestrator | + name = "testbed-management"
2026-03-11 00:02:23.334482 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334486 | orchestrator | + stateful = (known after apply)
2026-03-11 00:02:23.334490 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334494 | orchestrator | }
2026-03-11 00:02:23.334497 | orchestrator |
2026-03-11 00:02:23.334501 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-03-11 00:02:23.334505 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-03-11 00:02:23.334509 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.334513 | orchestrator | + description = "node security group"
2026-03-11 00:02:23.334516 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334520 | orchestrator | + name = "testbed-node"
2026-03-11 00:02:23.334524 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334528 | orchestrator | + stateful = (known after apply)
2026-03-11 00:02:23.334532 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334535 | orchestrator | }
2026-03-11 00:02:23.334539 | orchestrator |
2026-03-11 00:02:23.334543 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-03-11 00:02:23.334547 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-03-11 00:02:23.334551 | orchestrator | + all_tags = (known after apply)
2026-03-11 00:02:23.334554 | orchestrator | + cidr = "192.168.16.0/20"
2026-03-11 00:02:23.334558 | orchestrator | + dns_nameservers = [
2026-03-11 00:02:23.334562 | orchestrator | + "8.8.8.8",
2026-03-11 00:02:23.334566 | orchestrator | + "9.9.9.9",
2026-03-11 00:02:23.334570 | orchestrator | ]
2026-03-11 00:02:23.334574 | orchestrator | + enable_dhcp = true
2026-03-11 00:02:23.334577 | orchestrator | + gateway_ip = (known after apply)
2026-03-11 00:02:23.334581 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334585 | orchestrator | + ip_version = 4
2026-03-11 00:02:23.334589 | orchestrator | + ipv6_address_mode = (known after apply)
2026-03-11 00:02:23.334593 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-03-11 00:02:23.334597 | orchestrator | + name = "subnet-testbed-management"
2026-03-11 00:02:23.334600 | orchestrator | + network_id = (known after apply)
2026-03-11 00:02:23.334604 | orchestrator | + no_gateway = false
2026-03-11 00:02:23.334608 | orchestrator | + region = (known after apply)
2026-03-11 00:02:23.334612 | orchestrator | + service_types = (known after apply)
2026-03-11 00:02:23.334618 | orchestrator | + tenant_id = (known after apply)
2026-03-11 00:02:23.334622 | orchestrator |
2026-03-11 00:02:23.334626 | orchestrator | + allocation_pool {
2026-03-11 00:02:23.334639 | orchestrator | + end = "192.168.31.250"
2026-03-11 00:02:23.334643 | orchestrator | + start = "192.168.31.200"
2026-03-11 00:02:23.334646 | orchestrator | }
2026-03-11 00:02:23.334650 | orchestrator | }
2026-03-11 00:02:23.334654 | orchestrator |
2026-03-11 00:02:23.334658 | orchestrator | # terraform_data.image will be created
2026-03-11 00:02:23.334662 | orchestrator | + resource "terraform_data" "image" {
2026-03-11 00:02:23.334665 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334669 | orchestrator | + input = "Ubuntu 24.04"
2026-03-11 00:02:23.334673 | orchestrator | + output = (known after apply)
2026-03-11 00:02:23.334677 | orchestrator | }
2026-03-11 00:02:23.334680 | orchestrator |
2026-03-11 00:02:23.334684 | orchestrator | # terraform_data.image_node will be created
2026-03-11 00:02:23.334688 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-11 00:02:23.334692 | orchestrator | + id = (known after apply)
2026-03-11 00:02:23.334695 | orchestrator | + input = "Ubuntu 24.04"
2026-03-11 00:02:23.334699 | orchestrator | + output = (known after apply)
2026-03-11 00:02:23.334703 | orchestrator | }
2026-03-11 00:02:23.334707 | orchestrator |
2026-03-11 00:02:23.334711 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-11 00:02:23.334714 | orchestrator |
2026-03-11 00:02:23.334718 | orchestrator | Changes to Outputs:
2026-03-11 00:02:23.334722 | orchestrator | + manager_address = (sensitive value)
2026-03-11 00:02:23.334726 | orchestrator | + private_key = (sensitive value)
2026-03-11 00:02:23.604069 | orchestrator | terraform_data.image_node: Creating...
2026-03-11 00:02:23.604126 | orchestrator | terraform_data.image: Creating...
2026-03-11 00:02:23.604134 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=8b73db6e-eb51-49ed-6c22-5dc468f3a2cb]
2026-03-11 00:02:23.610086 | orchestrator | terraform_data.image: Creation complete after 0s [id=08ee2d06-c3a6-7158-22d0-c55a8a575113]
2026-03-11 00:02:23.616005 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-11 00:02:23.619201 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-11 00:02:23.630073 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-11 00:02:23.630122 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-11 00:02:23.630127 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-11 00:02:23.630813 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-11 00:02:23.642034 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-11 00:02:23.648911 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-11 00:02:23.656737 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-11 00:02:23.656920 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-11 00:02:24.099525 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-11 00:02:24.105277 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-11 00:02:24.107243 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-11 00:02:24.110896 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-11 00:02:24.136513 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-11 00:02:24.146835 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-11 00:02:25.180367 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=05077bac-1559-47ee-abb1-e088067c4c35]
2026-03-11 00:02:25.374305 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-11 00:02:27.288684 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=747dd4bc-1e4a-4053-bdf0-887e0b92b80b]
2026-03-11 00:02:27.298090 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-11 00:02:27.314999 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=cd4ac081-6fbb-4e27-9e74-8104c0078ac5]
2026-03-11 00:02:27.322479 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-11 00:02:27.327869 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=309d1afacda45c13874c308625697ebb2dbf15ca]
2026-03-11 00:02:27.330918 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=37fe87c5-ca63-4522-b75b-0d9e996155b4]
2026-03-11 00:02:27.343330 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=3907c798-8bb0-4366-8422-7f195107ce20]
2026-03-11 00:02:27.350075 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=1656cb8a-d6e3-4504-aba1-0af808046f0d]
2026-03-11 00:02:27.355185 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-11 00:02:27.356472 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-11 00:02:27.357759 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-11 00:02:27.368387 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-11 00:02:27.386222 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4]
2026-03-11 00:02:27.396382 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-11 00:02:27.418155 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=7c68b4db-9517-4776-878e-5cc78b8cffbb]
2026-03-11 00:02:27.424809 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-11 00:02:27.425227 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=bb163787-5642-41ea-bb50-14394c4239c7]
2026-03-11 00:02:27.430702 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=eecf890869038acfc99261074b218c52db3541a6]
2026-03-11 00:02:27.445015 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-11 00:02:27.448131 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=ef09bb17-59a8-4317-bed7-0146c94a1062]
2026-03-11 00:02:28.557445 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=6f78cb89-aa86-49d2-a847-247e2fff172d]
2026-03-11 00:02:29.405435 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=4188fec2-e659-4fb3-9a03-3c99ce39f97e]
2026-03-11 00:02:29.413960 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-11 00:02:30.693277 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=00967594-40dd-4a79-bd3f-9f82494451f1]
2026-03-11 00:02:30.793513 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=7ba47a05-150e-4018-97cd-15f15bf57c78]
2026-03-11 00:02:30.802768 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb]
2026-03-11 00:02:30.823856 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=29a3254a-e175-4d08-87e3-0a6181614d24]
2026-03-11 00:02:30.847826 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=fbed40d1-1e79-4316-99a8-e618a0da2df7]
2026-03-11 00:02:30.858522 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=317f502c-791e-4152-8dc2-509ac4c350a5]
2026-03-11 00:02:33.879560 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=d8bb9c6c-7077-4e7d-a4a7-edd3d213cb29]
2026-03-11 00:02:33.888408 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-11 00:02:33.889041 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-11 00:02:33.894746 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-11 00:02:34.099284 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9cf6e546-c074-41c5-af7b-7cbe6945184f]
2026-03-11 00:02:34.117233 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-11 00:02:34.118058 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-11 00:02:34.120901 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-11 00:02:34.127288 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-11 00:02:34.127613 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-11 00:02:34.129089 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-11 00:02:34.132357 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-11 00:02:34.132587 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-11 00:02:34.206738 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=5403663d-728e-4415-b22d-f7dbedb0a0e1]
2026-03-11 00:02:34.219079 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-11 00:02:34.839059 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=23a1ca55-ce9b-4054-93cc-32009e6f66a5]
2026-03-11 00:02:34.842683 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=8c13bbad-09b2-454b-a44c-2dc629133d86]
2026-03-11 00:02:34.847572 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-11 00:02:34.847626 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-11 00:02:34.950925 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=c2835cb3-c907-485f-b8ff-c7cfe1c4f97f]
2026-03-11 00:02:34.960413 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-11 00:02:35.112854 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3c17a170-a29d-4b98-b26e-54c796f8dad4]
2026-03-11 00:02:35.114808 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=c22f01c5-6a64-4c75-9231-63713047177b]
2026-03-11 00:02:35.120265 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-11 00:02:35.123281 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-11 00:02:35.164061 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=cb138fdb-6f92-4bff-9047-660a1a96eb54]
2026-03-11 00:02:35.169512 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-11 00:02:35.314554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=a4208480-8dc0-4b6b-85f6-e01a4c77f048]
2026-03-11 00:02:35.321855 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-11 00:02:35.364246 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=aa9c9a07-1981-474d-86da-2a4bf2c365c1]
2026-03-11 00:02:35.659598 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=cefa602c-64a0-470c-8e05-44ebc0a52e7c]
2026-03-11 00:02:35.681615 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=864e9e75-40d6-4339-8205-ac54af0f700c]
2026-03-11 00:02:35.800376 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=949dc722-c450-4e80-aa2a-b6857a95af45]
2026-03-11 00:02:35.853741 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=8de3b79a-e50f-4cde-968f-578d1a77a98e]
2026-03-11 00:02:35.970090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=37ddc5de-cb89-4bc3-ad49-0f940d0275b2]
2026-03-11 00:02:36.000376 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=a8beb9b8-6d58-43a2-a186-b6a1851bed27]
2026-03-11 00:02:36.110922 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=db8e75d8-7617-4556-9a1c-9bc0068b71dc]
2026-03-11 00:02:36.215957 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=0df8ee52-f19b-4e80-a3ad-c2745c638aa9]
2026-03-11 00:02:36.917201 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=37c5b5ba-e396-4abd-90c5-bc3b0a68ee3e]
2026-03-11 00:02:36.933037 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-11 00:02:36.946061 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-11 00:02:36.947622 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-11 00:02:36.955883 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-11 00:02:36.957707 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-11 00:02:36.971787 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-11 00:02:36.977441 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-11 00:02:39.562245 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=f82949aa-e883-41c3-9302-0b5e7ee3113d]
2026-03-11 00:02:39.569747 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-11 00:02:39.576360 | orchestrator | local_file.inventory: Creating...
2026-03-11 00:02:39.585062 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-11 00:02:39.587739 | orchestrator | local_file.inventory: Creation complete after 0s [id=ab7f881bd7acbdabaaa55a66e04837262421c261]
2026-03-11 00:02:39.590480 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8f9078b46145da10116bce329d572ba0c026c883]
2026-03-11 00:02:40.456504 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=f82949aa-e883-41c3-9302-0b5e7ee3113d]
2026-03-11 00:02:46.953370 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-11 00:02:46.953494 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-11 00:02:46.957944 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-11 00:02:46.960411 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-11 00:02:46.974821 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-11 00:02:46.978215 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-11 00:02:56.962489 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-11 00:02:56.962604 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-11 00:02:56.962632 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-11 00:02:56.962645 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-11 00:02:56.975938 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-11 00:02:56.978338 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-11 00:03:06.971547 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-11 00:03:06.971705 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-11 00:03:06.971735 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-11 00:03:06.971757 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-11 00:03:06.976238 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-11 00:03:06.978439 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-11 00:03:16.973259 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-11 00:03:16.973386 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-11 00:03:16.973400 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-11 00:03:16.973410 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-11 00:03:16.976648 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-11 00:03:16.978985 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-11 00:03:18.357053 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=9366e362-e4ac-4e90-9e8f-ba6e89e89b99]
2026-03-11 00:03:26.974766 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-11 00:03:26.974866 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-11 00:03:26.974878 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-11 00:03:26.977050 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-03-11 00:03:26.979431 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-11 00:03:27.992716 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=9966c8bf-4107-4a0d-8c48-2f3767450ad5]
2026-03-11 00:03:28.490038 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=23b2faca-e3b5-4a70-8a00-90ee5e97abb0]
2026-03-11 00:03:28.490126 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=fec779a3-b410-43bd-8ab0-dc9bf7554095]
2026-03-11 00:03:28.698037 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 52s [id=d1335862-06c0-4342-a1bc-f22dc3563e0d]
2026-03-11 00:03:36.984642 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-03-11 00:03:38.016523 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=a232c8eb-d1ad-4069-9cf4-18f679ecf764]
2026-03-11 00:03:38.042245 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-11 00:03:38.046120 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4147857301561273569]
2026-03-11 00:03:38.050261 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-11 00:03:38.050730 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-11 00:03:38.050790 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-11 00:03:38.050840 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-11 00:03:38.055814 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-11 00:03:38.080795 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-11 00:03:38.092216 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-11 00:03:38.092740 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-11 00:03:38.135415 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-11 00:03:38.151973 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-11 00:03:41.469477 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=fec779a3-b410-43bd-8ab0-dc9bf7554095/7c68b4db-9517-4776-878e-5cc78b8cffbb]
2026-03-11 00:03:41.503550 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=d1335862-06c0-4342-a1bc-f22dc3563e0d/37fe87c5-ca63-4522-b75b-0d9e996155b4]
2026-03-11 00:03:41.551004 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=9966c8bf-4107-4a0d-8c48-2f3767450ad5/ef09bb17-59a8-4317-bed7-0146c94a1062]
2026-03-11 00:03:41.601380 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=9966c8bf-4107-4a0d-8c48-2f3767450ad5/ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4]
2026-03-11 00:03:47.589959 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=fec779a3-b410-43bd-8ab0-dc9bf7554095/3907c798-8bb0-4366-8422-7f195107ce20]
2026-03-11 00:03:47.640501 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=d1335862-06c0-4342-a1bc-f22dc3563e0d/1656cb8a-d6e3-4504-aba1-0af808046f0d]
2026-03-11 00:03:47.665460 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=d1335862-06c0-4342-a1bc-f22dc3563e0d/747dd4bc-1e4a-4053-bdf0-887e0b92b80b]
2026-03-11 00:03:47.672063 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=9966c8bf-4107-4a0d-8c48-2f3767450ad5/bb163787-5642-41ea-bb50-14394c4239c7]
2026-03-11 00:03:47.676023 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=fec779a3-b410-43bd-8ab0-dc9bf7554095/cd4ac081-6fbb-4e27-9e74-8104c0078ac5]
2026-03-11 00:03:48.153196 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-11 00:03:58.153464 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-11 00:03:58.567797 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=c4220c76-2e44-4608-adf2-31823e370b19]
2026-03-11 00:03:58.590636 | orchestrator |
2026-03-11 00:03:58.590744 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-11 00:03:58.590758 | orchestrator |
2026-03-11 00:03:58.590766 | orchestrator | Outputs:
2026-03-11 00:03:58.590771 | orchestrator |
2026-03-11 00:03:58.590784 | orchestrator | manager_address =
2026-03-11 00:03:58.590789 | orchestrator | private_key =
2026-03-11 00:03:58.948157 | orchestrator | ok: Runtime: 0:01:39.791665
2026-03-11 00:03:58.979901 |
2026-03-11 00:03:58.980036 | TASK [Fetch manager address]
2026-03-11 00:03:59.475476 | orchestrator | ok
2026-03-11 00:03:59.485322 |
2026-03-11 00:03:59.485451 | TASK [Set manager_host address]
2026-03-11 00:03:59.564095 | orchestrator | ok
2026-03-11 00:03:59.572940 |
2026-03-11 00:03:59.573064 | LOOP [Update ansible collections]
2026-03-11 00:04:00.724620 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-11 00:04:00.725048 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-11 00:04:00.725114 | orchestrator | Starting galaxy collection install process
2026-03-11 00:04:00.725158 | orchestrator | Process install dependency map
2026-03-11 00:04:00.725197 | orchestrator | Starting collection install process
2026-03-11 00:04:00.725233 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-03-11 00:04:00.725274 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-03-11 00:04:00.725327 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-11 00:04:00.725411 | orchestrator | ok: Item: commons Runtime: 0:00:00.781287
2026-03-11 00:04:01.898916 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-11 00:04:01.899082 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-11 00:04:01.899130 | orchestrator | Starting galaxy collection install process
2026-03-11 00:04:01.899168 | orchestrator | Process install dependency map
2026-03-11 00:04:01.899202 | orchestrator | Starting collection install process
2026-03-11 00:04:01.899234 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-03-11 00:04:01.899357 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-03-11 00:04:01.899392 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-11 00:04:01.899446 | orchestrator | ok: Item: services Runtime: 0:00:00.780991
2026-03-11 00:04:01.915726 |
2026-03-11 00:04:01.915884 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-11 00:04:12.541574 | orchestrator | ok
2026-03-11 00:04:12.553323 |
2026-03-11 00:04:12.553462 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-11 00:05:12.600469 | orchestrator | ok
2026-03-11 00:05:12.610220 |
2026-03-11 00:05:12.610494 | TASK [Fetch manager ssh hostkey]
2026-03-11 00:05:14.200817 | orchestrator | Output suppressed because no_log was given
2026-03-11 00:05:14.216944 |
2026-03-11 00:05:14.217138 | TASK [Get ssh keypair from terraform environment]
2026-03-11 00:05:14.755998 | orchestrator | ok: Runtime: 0:00:00.005136
2026-03-11 00:05:14.772614 |
2026-03-11 00:05:14.772853 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-11 00:05:14.824647 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-11 00:05:14.835948 |
2026-03-11 00:05:14.836098 | TASK [Run manager part 0]
2026-03-11 00:05:15.779623 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-11 00:05:15.867597 | orchestrator |
2026-03-11 00:05:15.867835 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-11 00:05:15.867853 | orchestrator |
2026-03-11 00:05:15.867872 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-11 00:05:17.736461 | orchestrator | ok: [testbed-manager]
2026-03-11 00:05:17.736515 | orchestrator |
2026-03-11 00:05:17.736537 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-11 00:05:17.736546 | orchestrator |
2026-03-11 00:05:17.736555 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:05:19.712269 | orchestrator | ok: [testbed-manager]
2026-03-11 00:05:19.712363 | orchestrator |
2026-03-11 00:05:19.712371 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-11 00:05:20.371158 | orchestrator | ok: [testbed-manager]
2026-03-11 00:05:20.371226 | orchestrator |
2026-03-11 00:05:20.371238 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-11 00:05:20.423933 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:05:20.424004 | orchestrator |
2026-03-11 00:05:20.424020 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-11 00:05:20.455929 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:05:20.455997 | orchestrator |
2026-03-11 00:05:20.456008 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-11 00:05:20.507171 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:05:20.507239 | orchestrator |
2026-03-11 00:05:20.507246 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-11 00:05:20.546201 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:05:20.546262 | orchestrator |
2026-03-11 00:05:20.546269 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-11 00:05:20.582810 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:05:20.582888 | orchestrator |
2026-03-11 00:05:20.582900 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-11 00:05:20.620718 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:05:20.620827 | orchestrator |
2026-03-11 00:05:20.620842 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-11 00:05:20.651152 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:05:20.651224 | orchestrator |
2026-03-11 00:05:20.651237 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-11 00:05:21.389542 | orchestrator | changed: [testbed-manager]
2026-03-11 00:05:21.389616 | orchestrator |
2026-03-11 00:05:21.389626 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-11 00:08:11.985667 | orchestrator | changed: [testbed-manager]
2026-03-11 00:08:11.985805 | orchestrator |
2026-03-11 00:08:11.985824 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-11 00:09:31.537490 | orchestrator | changed: [testbed-manager]
2026-03-11 00:09:31.537586 | orchestrator |
2026-03-11 00:09:31.537602 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-11 00:09:53.856362 | orchestrator | changed: [testbed-manager]
2026-03-11 00:09:53.856407 | orchestrator |
2026-03-11 00:09:53.856416 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-11 00:10:02.599393 | orchestrator | changed: [testbed-manager]
2026-03-11 00:10:02.599444 | orchestrator |
2026-03-11 00:10:02.599453 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-11 00:10:02.652952 | orchestrator | ok: [testbed-manager]
2026-03-11 00:10:02.652996 | orchestrator |
2026-03-11 00:10:02.653006 | orchestrator | TASK [Get current user] ********************************************************
2026-03-11 00:10:03.485216 | orchestrator | ok: [testbed-manager]
2026-03-11 00:10:03.485314 | orchestrator |
2026-03-11 00:10:03.485333 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-11 00:10:04.227996 | orchestrator | changed: [testbed-manager]
2026-03-11 00:10:04.228041 | orchestrator |
2026-03-11 00:10:04.228050 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-11 00:10:10.358001 | orchestrator | changed: [testbed-manager]
2026-03-11 00:10:10.358153 | orchestrator |
2026-03-11 00:10:10.358202 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-11 00:10:16.125872 | orchestrator | changed: [testbed-manager]
2026-03-11 00:10:16.125954 |
orchestrator | 2026-03-11 00:10:16.125969 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-11 00:10:18.677032 | orchestrator | changed: [testbed-manager] 2026-03-11 00:10:18.677149 | orchestrator | 2026-03-11 00:10:18.677176 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-11 00:10:20.391971 | orchestrator | changed: [testbed-manager] 2026-03-11 00:10:20.392027 | orchestrator | 2026-03-11 00:10:20.392037 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-11 00:10:21.507361 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-11 00:10:21.507418 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-11 00:10:21.507429 | orchestrator | 2026-03-11 00:10:21.507438 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-11 00:10:21.549203 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-11 00:10:21.549284 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-11 00:10:21.549298 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-11 00:10:21.549311 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-11 00:10:24.807927 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-11 00:10:24.808065 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-11 00:10:24.808084 | orchestrator | 2026-03-11 00:10:24.808099 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-11 00:10:25.350929 | orchestrator | changed: [testbed-manager] 2026-03-11 00:10:25.350975 | orchestrator | 2026-03-11 00:10:25.350984 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-11 00:13:45.872047 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-11 00:13:45.872134 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-11 00:13:45.872149 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-11 00:13:45.872159 | orchestrator | 2026-03-11 00:13:45.872170 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-11 00:13:48.202294 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-11 00:13:48.202329 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-11 00:13:48.202334 | orchestrator | 2026-03-11 00:13:48.202339 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-11 00:13:48.202344 | orchestrator | 2026-03-11 00:13:48.202348 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:13:49.543575 | orchestrator | ok: [testbed-manager] 2026-03-11 00:13:49.543611 | orchestrator | 2026-03-11 00:13:49.543618 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-11 00:13:49.597971 | orchestrator | ok: [testbed-manager] 2026-03-11 00:13:49.598011 | 
orchestrator | 2026-03-11 00:13:49.598054 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-11 00:13:49.667189 | orchestrator | ok: [testbed-manager] 2026-03-11 00:13:49.667228 | orchestrator | 2026-03-11 00:13:49.667236 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-11 00:13:50.417477 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:50.417576 | orchestrator | 2026-03-11 00:13:50.417594 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-11 00:13:51.032517 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:51.032614 | orchestrator | 2026-03-11 00:13:51.032631 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-11 00:13:52.297181 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-11 00:13:52.297236 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-11 00:13:52.297246 | orchestrator | 2026-03-11 00:13:52.297263 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-11 00:13:53.612932 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:53.613065 | orchestrator | 2026-03-11 00:13:53.613091 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-11 00:13:55.378527 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:13:55.378621 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-11 00:13:55.378635 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:13:55.378647 | orchestrator | 2026-03-11 00:13:55.378660 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-11 00:13:55.437285 | orchestrator | skipping: 
[testbed-manager] 2026-03-11 00:13:55.437362 | orchestrator | 2026-03-11 00:13:55.437375 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-11 00:13:55.516800 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:55.516857 | orchestrator | 2026-03-11 00:13:55.516863 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-11 00:13:56.072384 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:56.072505 | orchestrator | 2026-03-11 00:13:56.072521 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-11 00:13:56.143871 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:56.143967 | orchestrator | 2026-03-11 00:13:56.143983 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-11 00:13:57.037795 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-11 00:13:57.037875 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:57.037888 | orchestrator | 2026-03-11 00:13:57.037899 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-11 00:13:57.085568 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:57.085673 | orchestrator | 2026-03-11 00:13:57.085696 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-11 00:13:57.124189 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:57.124282 | orchestrator | 2026-03-11 00:13:57.124298 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-11 00:13:57.162576 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:57.162635 | orchestrator | 2026-03-11 00:13:57.162649 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-11 00:13:57.243190 | 
orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:57.243280 | orchestrator | 2026-03-11 00:13:57.243296 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-11 00:13:57.972864 | orchestrator | ok: [testbed-manager] 2026-03-11 00:13:57.972926 | orchestrator | 2026-03-11 00:13:57.972934 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-11 00:13:57.972943 | orchestrator | 2026-03-11 00:13:57.972950 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:13:59.321111 | orchestrator | ok: [testbed-manager] 2026-03-11 00:13:59.321198 | orchestrator | 2026-03-11 00:13:59.321214 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-11 00:14:00.222159 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:00.223115 | orchestrator | 2026-03-11 00:14:00.223178 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:14:00.223195 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-11 00:14:00.223207 | orchestrator | 2026-03-11 00:14:00.711717 | orchestrator | ok: Runtime: 0:08:45.185110 2026-03-11 00:14:00.733297 | 2026-03-11 00:14:00.733470 | TASK [Point out that logging in to the manager is now possible] 2026-03-11 00:14:00.784730 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-11 00:14:00.796270 | 2026-03-11 00:14:00.796441 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-11 00:14:00.841881 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. No further output is shown here. It takes a few minutes for this task to complete. 
2026-03-11 00:14:00.852333 | 2026-03-11 00:14:00.852468 | TASK [Run manager part 1 + 2] 2026-03-11 00:14:01.771720 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-11 00:14:01.825858 | orchestrator | 2026-03-11 00:14:01.825915 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-11 00:14:01.825922 | orchestrator | 2026-03-11 00:14:01.825938 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:14:04.721266 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:04.721345 | orchestrator | 2026-03-11 00:14:04.721410 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-11 00:14:04.759253 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:14:04.759309 | orchestrator | 2026-03-11 00:14:04.759319 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-11 00:14:04.791966 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:04.792015 | orchestrator | 2026-03-11 00:14:04.792023 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-11 00:14:04.826179 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:04.826241 | orchestrator | 2026-03-11 00:14:04.826252 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-11 00:14:04.898835 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:04.898892 | orchestrator | 2026-03-11 00:14:04.898900 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-11 00:14:04.963875 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:04.963930 | orchestrator | 2026-03-11 00:14:04.963938 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-11 00:14:05.017462 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-11 00:14:05.017524 | orchestrator | 2026-03-11 00:14:05.017532 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-11 00:14:05.724963 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:05.725033 | orchestrator | 2026-03-11 00:14:05.725045 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-11 00:14:05.782941 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:14:05.782994 | orchestrator | 2026-03-11 00:14:05.783000 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-11 00:14:07.205222 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:07.205311 | orchestrator | 2026-03-11 00:14:07.205325 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-11 00:14:07.782637 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:07.782732 | orchestrator | 2026-03-11 00:14:07.782749 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-11 00:14:08.916178 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:08.916280 | orchestrator | 2026-03-11 00:14:08.916310 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-11 00:14:25.264648 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:25.264749 | orchestrator | 2026-03-11 00:14:25.264762 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-11 00:14:25.913934 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:25.914669 | orchestrator | 2026-03-11 00:14:25.914721 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-11 00:14:25.967001 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:14:25.967072 | orchestrator | 2026-03-11 00:14:25.967082 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-11 00:14:26.948692 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:26.948806 | orchestrator | 2026-03-11 00:14:26.948821 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-11 00:14:27.886794 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:27.886871 | orchestrator | 2026-03-11 00:14:27.886880 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-11 00:14:28.456690 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:28.456792 | orchestrator | 2026-03-11 00:14:28.456809 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-11 00:14:28.496907 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-11 00:14:28.497066 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-11 00:14:28.497083 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-11 00:14:28.497096 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-11 00:14:30.459691 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:30.459774 | orchestrator | 2026-03-11 00:14:30.459790 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-11 00:14:39.080836 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-11 00:14:39.080909 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-11 00:14:39.080927 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-11 00:14:39.080940 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-11 00:14:39.080961 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-11 00:14:39.080973 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-11 00:14:39.080984 | orchestrator | 2026-03-11 00:14:39.080997 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-11 00:14:40.145073 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:40.145118 | orchestrator | 2026-03-11 00:14:40.145128 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-11 00:14:40.186046 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:14:40.186084 | orchestrator | 2026-03-11 00:14:40.186091 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-11 00:14:43.160809 | orchestrator | changed: [testbed-manager] 2026-03-11 00:14:43.160903 | orchestrator | 2026-03-11 00:14:43.160918 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-11 00:14:43.196943 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:14:43.197045 | orchestrator | 2026-03-11 00:14:43.197060 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-11 00:16:26.724564 | orchestrator | changed: [testbed-manager] 2026-03-11 
00:16:26.724643 | orchestrator | 2026-03-11 00:16:26.724655 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-11 00:16:27.710554 | orchestrator | ok: [testbed-manager] 2026-03-11 00:16:27.710645 | orchestrator | 2026-03-11 00:16:27.710663 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:16:27.710691 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-11 00:16:27.710703 | orchestrator | 2026-03-11 00:16:28.002978 | orchestrator | ok: Runtime: 0:02:26.626958 2026-03-11 00:16:28.020178 | 2026-03-11 00:16:28.020327 | TASK [Reboot manager] 2026-03-11 00:16:29.558288 | orchestrator | ok: Runtime: 0:00:00.892962 2026-03-11 00:16:29.576027 | 2026-03-11 00:16:29.576192 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-11 00:16:42.578901 | orchestrator | ok 2026-03-11 00:16:42.590823 | 2026-03-11 00:16:42.591000 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-11 00:17:42.641794 | orchestrator | ok 2026-03-11 00:17:42.651025 | 2026-03-11 00:17:42.651153 | TASK [Deploy manager + bootstrap nodes] 2026-03-11 00:17:45.019783 | orchestrator | 2026-03-11 00:17:45.020051 | orchestrator | # DEPLOY MANAGER 2026-03-11 00:17:45.020079 | orchestrator | 2026-03-11 00:17:45.020094 | orchestrator | + set -e 2026-03-11 00:17:45.020107 | orchestrator | + echo 2026-03-11 00:17:45.020121 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-11 00:17:45.020139 | orchestrator | + echo 2026-03-11 00:17:45.020191 | orchestrator | + cat /opt/manager-vars.sh 2026-03-11 00:17:45.023303 | orchestrator | export NUMBER_OF_NODES=6 2026-03-11 00:17:45.023344 | orchestrator | 2026-03-11 00:17:45.023357 | orchestrator | export CEPH_VERSION=reef 2026-03-11 00:17:45.023370 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-11 00:17:45.023384 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-11 00:17:45.023409 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-11 00:17:45.023420 | orchestrator | 2026-03-11 00:17:45.023439 | orchestrator | export ARA=false 2026-03-11 00:17:45.023451 | orchestrator | export DEPLOY_MODE=manager 2026-03-11 00:17:45.023469 | orchestrator | export TEMPEST=true 2026-03-11 00:17:45.023481 | orchestrator | export IS_ZUUL=true 2026-03-11 00:17:45.023492 | orchestrator | 2026-03-11 00:17:45.023511 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.101 2026-03-11 00:17:45.023523 | orchestrator | export EXTERNAL_API=false 2026-03-11 00:17:45.023534 | orchestrator | 2026-03-11 00:17:45.023545 | orchestrator | export IMAGE_USER=ubuntu 2026-03-11 00:17:45.023561 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-11 00:17:45.023572 | orchestrator | 2026-03-11 00:17:45.023584 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-11 00:17:45.023603 | orchestrator | 2026-03-11 00:17:45.023614 | orchestrator | + echo 2026-03-11 00:17:45.023632 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-11 00:17:45.024437 | orchestrator | ++ export INTERACTIVE=false 2026-03-11 00:17:45.024457 | orchestrator | ++ INTERACTIVE=false 2026-03-11 00:17:45.024470 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-11 00:17:45.024484 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-11 00:17:45.024641 | orchestrator | + source /opt/manager-vars.sh 2026-03-11 00:17:45.024656 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-11 00:17:45.024668 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-11 00:17:45.024679 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-11 00:17:45.024694 | orchestrator | ++ CEPH_VERSION=reef 2026-03-11 00:17:45.024706 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-11 00:17:45.024718 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-11 00:17:45.024729 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-11 00:17:45.024740 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-11 00:17:45.024840 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-11 00:17:45.024891 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-11 00:17:45.024904 | orchestrator | ++ export ARA=false 2026-03-11 00:17:45.024915 | orchestrator | ++ ARA=false 2026-03-11 00:17:45.024930 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-11 00:17:45.024942 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-11 00:17:45.024953 | orchestrator | ++ export TEMPEST=true 2026-03-11 00:17:45.024964 | orchestrator | ++ TEMPEST=true 2026-03-11 00:17:45.024975 | orchestrator | ++ export IS_ZUUL=true 2026-03-11 00:17:45.024986 | orchestrator | ++ IS_ZUUL=true 2026-03-11 00:17:45.024997 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.101 2026-03-11 00:17:45.025009 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.101 2026-03-11 00:17:45.025020 | orchestrator | ++ export EXTERNAL_API=false 2026-03-11 00:17:45.025031 | orchestrator | ++ EXTERNAL_API=false 2026-03-11 00:17:45.025042 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-11 00:17:45.025053 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-11 00:17:45.025067 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-11 00:17:45.025079 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-11 00:17:45.025090 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-11 00:17:45.025105 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-11 00:17:45.025117 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-11 00:17:45.078298 | orchestrator | + docker version 2026-03-11 00:17:45.193762 | orchestrator | Client: Docker Engine - Community 2026-03-11 00:17:45.193883 | orchestrator | Version: 27.5.1 2026-03-11 00:17:45.193901 | orchestrator | API version: 1.47 2026-03-11 00:17:45.193915 | orchestrator | Go version: go1.22.11 2026-03-11 00:17:45.193927 | orchestrator | Git commit: 9f9e405 2026-03-11 00:17:45.193939 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-11 00:17:45.193951 | orchestrator | OS/Arch: linux/amd64 2026-03-11 00:17:45.193962 | orchestrator | Context: default 2026-03-11 00:17:45.193973 | orchestrator | 2026-03-11 00:17:45.193985 | orchestrator | Server: Docker Engine - Community 2026-03-11 00:17:45.193996 | orchestrator | Engine: 2026-03-11 00:17:45.194007 | orchestrator | Version: 27.5.1 2026-03-11 00:17:45.194077 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-11 00:17:45.194118 | orchestrator | Go version: go1.22.11 2026-03-11 00:17:45.194130 | orchestrator | Git commit: 4c9b3b0 2026-03-11 00:17:45.194141 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-11 00:17:45.194152 | orchestrator | OS/Arch: linux/amd64 2026-03-11 00:17:45.194163 | orchestrator | Experimental: false 2026-03-11 00:17:45.194174 | orchestrator | containerd: 2026-03-11 00:17:45.194185 | orchestrator | Version: v2.2.1 2026-03-11 00:17:45.194197 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-11 00:17:45.194208 | orchestrator | runc: 2026-03-11 00:17:45.194219 | orchestrator | Version: 1.3.4 2026-03-11 00:17:45.194230 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-11 00:17:45.194241 | orchestrator | docker-init: 2026-03-11 00:17:45.194252 | orchestrator | Version: 0.19.0 2026-03-11 00:17:45.194264 | orchestrator | GitCommit: de40ad0 2026-03-11 00:17:45.195605 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-11 00:17:45.204384 | orchestrator | + set -e 2026-03-11 00:17:45.204435 | orchestrator | + source /opt/manager-vars.sh 2026-03-11 00:17:45.204448 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-11 00:17:45.204461 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-11 00:17:45.204472 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-11 00:17:45.204483 | orchestrator | ++ CEPH_VERSION=reef 2026-03-11 00:17:45.204494 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-11 
00:17:45.204506 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-11 00:17:45.204517 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-11 00:17:45.204528 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-11 00:17:45.204539 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-11 00:17:45.204550 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-11 00:17:45.204561 | orchestrator | ++ export ARA=false 2026-03-11 00:17:45.204572 | orchestrator | ++ ARA=false 2026-03-11 00:17:45.204583 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-11 00:17:45.204594 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-11 00:17:45.204605 | orchestrator | ++ export TEMPEST=true 2026-03-11 00:17:45.204616 | orchestrator | ++ TEMPEST=true 2026-03-11 00:17:45.204626 | orchestrator | ++ export IS_ZUUL=true 2026-03-11 00:17:45.204637 | orchestrator | ++ IS_ZUUL=true 2026-03-11 00:17:45.204648 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.101 2026-03-11 00:17:45.204660 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.101 2026-03-11 00:17:45.204670 | orchestrator | ++ export EXTERNAL_API=false 2026-03-11 00:17:45.204681 | orchestrator | ++ EXTERNAL_API=false 2026-03-11 00:17:45.204692 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-11 00:17:45.204703 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-11 00:17:45.204714 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-11 00:17:45.204725 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-11 00:17:45.204736 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-11 00:17:45.204747 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-11 00:17:45.204758 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-11 00:17:45.204768 | orchestrator | ++ export INTERACTIVE=false 2026-03-11 00:17:45.204779 | orchestrator | ++ INTERACTIVE=false 2026-03-11 00:17:45.204790 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-11 00:17:45.204805 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-11 00:17:45.204816 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-11 00:17:45.204827 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-03-11 00:17:45.210356 | orchestrator | + set -e
2026-03-11 00:17:45.210403 | orchestrator | + VERSION=9.5.0
2026-03-11 00:17:45.210417 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-03-11 00:17:45.219241 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-11 00:17:45.219269 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-11 00:17:45.223143 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-11 00:17:45.226616 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-11 00:17:45.234463 | orchestrator | /opt/configuration ~
2026-03-11 00:17:45.234501 | orchestrator | + set -e
2026-03-11 00:17:45.234513 | orchestrator | + pushd /opt/configuration
2026-03-11 00:17:45.234525 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-11 00:17:45.237354 | orchestrator | + source /opt/venv/bin/activate
2026-03-11 00:17:45.238565 | orchestrator | ++ deactivate nondestructive
2026-03-11 00:17:45.238586 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:45.238601 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:45.238638 | orchestrator | ++ hash -r
2026-03-11 00:17:45.238650 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:45.238661 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-11 00:17:45.238671 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-11 00:17:45.238682 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-11 00:17:45.238699 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-11 00:17:45.238711 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-11 00:17:45.238722 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-11 00:17:45.238732 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-11 00:17:45.238744 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:17:45.238756 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:17:45.238767 | orchestrator | ++ export PATH
2026-03-11 00:17:45.238778 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:45.238793 | orchestrator | ++ '[' -z '' ']'
2026-03-11 00:17:45.238804 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-11 00:17:45.238815 | orchestrator | ++ PS1='(venv) '
2026-03-11 00:17:45.238826 | orchestrator | ++ export PS1
2026-03-11 00:17:45.238837 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-11 00:17:45.238851 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-11 00:17:45.238891 | orchestrator | ++ hash -r
2026-03-11 00:17:45.238907 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-11 00:17:46.258595 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-11 00:17:46.259645 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-03-11 00:17:46.260742 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-11 00:17:46.261945 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-11 00:17:46.263026 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-11 00:17:46.272662 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-11 00:17:46.274003 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-11 00:17:46.275273 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-11 00:17:46.276520 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-11 00:17:46.305775 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.5)
2026-03-11 00:17:46.307035 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-11 00:17:46.308613 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-11 00:17:46.310069 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-11 00:17:46.313901 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-11 00:17:46.509400 | orchestrator | ++ which gilt
2026-03-11 00:17:46.512653 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-11 00:17:46.512703 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-11 00:17:46.710433 | orchestrator | osism.cfg-generics:
2026-03-11 00:17:46.812667 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-11 00:17:46.813214 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-11 00:17:46.814225 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-11 00:17:46.814271 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-11 00:17:47.303814 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-11 00:17:47.316026 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-11 00:17:47.603107 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-11 00:17:47.646487 | orchestrator | ~
2026-03-11 00:17:47.646614 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-11 00:17:47.646633 | orchestrator | + deactivate
2026-03-11 00:17:47.646646 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-11 00:17:47.646660 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:17:47.646671 | orchestrator | + export PATH
2026-03-11 00:17:47.646683 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-11 00:17:47.646695 | orchestrator | + '[' -n '' ']'
2026-03-11 00:17:47.646738 | orchestrator | + hash -r
2026-03-11 00:17:47.646750 | orchestrator | + '[' -n '' ']'
2026-03-11 00:17:47.646761 | orchestrator | + unset VIRTUAL_ENV
2026-03-11 00:17:47.646772 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-11 00:17:47.646783 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-11 00:17:47.646795 | orchestrator | + unset -f deactivate
2026-03-11 00:17:47.646806 | orchestrator | + popd
2026-03-11 00:17:47.647088 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-11 00:17:47.647114 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-11 00:17:47.647526 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-11 00:17:47.687475 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-11 00:17:47.687555 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-11 00:17:47.688041 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-11 00:17:47.722597 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-11 00:17:47.722907 | orchestrator | ++ semver 2024.2 2025.1
2026-03-11 00:17:47.755338 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-11 00:17:47.755489 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-11 00:17:47.820111 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-11 00:17:47.820179 | orchestrator | + source /opt/venv/bin/activate
2026-03-11 00:17:47.820189 | orchestrator | ++ deactivate nondestructive
2026-03-11 00:17:47.820196 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:47.820203 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:47.820209 | orchestrator | ++ hash -r
2026-03-11 00:17:47.820216 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:47.820223 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-11 00:17:47.820229 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-11 00:17:47.820236 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-11 00:17:47.820243 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-11 00:17:47.820249 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-11 00:17:47.820256 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-11 00:17:47.820262 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-11 00:17:47.820270 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:17:47.820290 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:17:47.820297 | orchestrator | ++ export PATH
2026-03-11 00:17:47.820304 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:17:47.820310 | orchestrator | ++ '[' -z '' ']'
2026-03-11 00:17:47.820317 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-11 00:17:47.820323 | orchestrator | ++ PS1='(venv) '
2026-03-11 00:17:47.820329 | orchestrator | ++ export PS1
2026-03-11 00:17:47.820335 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-11 00:17:47.820341 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-11 00:17:47.820348 | orchestrator | ++ hash -r
2026-03-11 00:17:47.820415 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-11 00:17:48.895411 | orchestrator |
2026-03-11 00:17:48.895523 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-11 00:17:48.895540 | orchestrator |
2026-03-11 00:17:48.895552 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-11 00:17:49.393644 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:49.393740 | orchestrator |
2026-03-11 00:17:49.393759 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-11 00:17:50.221145 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:50.221252 | orchestrator |
2026-03-11 00:17:50.221270 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-11 00:17:50.221315 | orchestrator |
2026-03-11 00:17:50.221327 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:17:52.147151 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:52.147250 | orchestrator |
2026-03-11 00:17:52.147267 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-11 00:17:52.196684 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:52.196780 | orchestrator |
2026-03-11 00:17:52.196798 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-11 00:17:52.594242 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:52.594352 | orchestrator |
2026-03-11 00:17:52.594372 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-11 00:17:52.628444 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:17:52.628545 | orchestrator |
2026-03-11 00:17:52.628562 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-11 00:17:52.932374 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:52.932476 | orchestrator |
2026-03-11 00:17:52.932493 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-11 00:17:53.255902 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:53.256024 | orchestrator |
2026-03-11 00:17:53.256042 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-11 00:17:53.363082 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:17:53.363180 | orchestrator |
2026-03-11 00:17:53.363195 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-11 00:17:53.363207 | orchestrator |
2026-03-11 00:17:53.363218 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:17:54.917668 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:54.917778 | orchestrator |
2026-03-11 00:17:54.917798 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-11 00:17:54.994088 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-11 00:17:54.994220 | orchestrator |
2026-03-11 00:17:54.994250 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-11 00:17:55.040235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-11 00:17:55.040318 | orchestrator |
2026-03-11 00:17:55.040332 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-11 00:17:55.960469 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-11 00:17:55.960592 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-11 00:17:55.960616 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-11 00:17:55.960637 | orchestrator |
2026-03-11 00:17:55.960661 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-11 00:17:57.497149 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-11 00:17:57.497261 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-11 00:17:57.497286 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-11 00:17:57.497308 | orchestrator |
2026-03-11 00:17:57.497330 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-11 00:17:58.088277 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:17:58.088378 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:58.088395 | orchestrator |
2026-03-11 00:17:58.088408 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-11 00:17:58.673779 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:17:58.673992 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:58.674012 | orchestrator |
2026-03-11 00:17:58.674062 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-11 00:17:58.715986 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:17:58.716073 | orchestrator |
2026-03-11 00:17:58.716089 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-11 00:17:59.037453 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:59.037552 | orchestrator |
2026-03-11 00:17:59.037568 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-11 00:17:59.101876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-11 00:17:59.101984 | orchestrator |
2026-03-11 00:17:59.101999 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-11 00:18:00.052142 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:00.052246 | orchestrator |
2026-03-11 00:18:00.052264 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-11 00:18:00.798967 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:00.799068 | orchestrator |
2026-03-11 00:18:00.799085 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-11 00:18:12.900885 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:12.900999 | orchestrator |
2026-03-11 00:18:12.901018 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-11 00:18:12.945957 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:18:12.946112 | orchestrator |
2026-03-11 00:18:12.946155 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-11 00:18:12.946169 | orchestrator |
2026-03-11 00:18:12.946181 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:18:14.673917 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:14.674004 | orchestrator |
2026-03-11 00:18:14.674060 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-11 00:18:14.777259 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-11 00:18:14.777331 | orchestrator |
2026-03-11 00:18:14.777340 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-11 00:18:14.835991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-11 00:18:14.836086 | orchestrator |
2026-03-11 00:18:14.836102 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-11 00:18:17.221459 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:17.221568 | orchestrator |
2026-03-11 00:18:17.221587 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-11 00:18:17.279326 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:17.279432 | orchestrator |
2026-03-11 00:18:17.279467 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-11 00:18:17.414303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-11 00:18:17.414398 | orchestrator |
2026-03-11 00:18:17.414413 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-11 00:18:20.088852 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-11 00:18:20.088965 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-11 00:18:20.088981 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-11 00:18:20.088994 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-11 00:18:20.089005 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-11 00:18:20.089016 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-11 00:18:20.089027 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-11 00:18:20.089038 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-11 00:18:20.089050 | orchestrator |
2026-03-11 00:18:20.089063 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-11 00:18:20.690322 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:20.690423 | orchestrator |
2026-03-11 00:18:20.690444 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-11 00:18:21.295685 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:21.295876 | orchestrator |
2026-03-11 00:18:21.295929 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-11 00:18:21.366241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-11 00:18:21.366340 | orchestrator |
2026-03-11 00:18:21.366356 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-11 00:18:22.533690 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-11 00:18:22.533871 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-11 00:18:22.533890 | orchestrator |
2026-03-11 00:18:22.533903 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-11 00:18:23.150746 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:23.150886 | orchestrator |
2026-03-11 00:18:23.150904 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-11 00:18:23.198970 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:18:23.199060 | orchestrator |
2026-03-11 00:18:23.199075 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-11 00:18:23.277355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-11 00:18:23.277451 | orchestrator |
2026-03-11 00:18:23.277467 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-11 00:18:23.907903 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:23.908006 | orchestrator |
2026-03-11 00:18:23.908023 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-11 00:18:23.963890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-11 00:18:23.963984 | orchestrator |
2026-03-11 00:18:23.963999 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-11 00:18:25.277030 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:18:25.277128 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:18:25.277141 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:25.277152 | orchestrator |
2026-03-11 00:18:25.277163 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-11 00:18:25.871469 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:25.871569 | orchestrator |
2026-03-11 00:18:25.871585 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-11 00:18:25.927147 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:18:25.927240 | orchestrator |
2026-03-11 00:18:25.927256 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-11 00:18:26.016210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-11 00:18:26.016312 | orchestrator |
2026-03-11 00:18:26.016329 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-11 00:18:26.527214 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:26.527336 | orchestrator |
2026-03-11 00:18:26.527354 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-11 00:18:26.907837 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:26.907936 | orchestrator |
2026-03-11 00:18:26.907953 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-11 00:18:27.981350 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-11 00:18:27.981457 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-11 00:18:27.981496 | orchestrator |
2026-03-11 00:18:27.981510 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-11 00:18:28.589166 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:28.589268 | orchestrator |
2026-03-11 00:18:28.589284 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-11 00:18:28.925847 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:28.925951 | orchestrator |
2026-03-11 00:18:28.925967 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-11 00:18:29.249055 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:29.249153 | orchestrator |
2026-03-11 00:18:29.249170 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-11 00:18:29.290898 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:18:29.290994 | orchestrator |
2026-03-11 00:18:29.291011 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-11 00:18:29.362116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-11 00:18:29.362238 | orchestrator |
2026-03-11 00:18:29.362254 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-11 00:18:29.403912 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:29.403999 | orchestrator |
2026-03-11 00:18:29.404014 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-11 00:18:31.244617 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-11 00:18:31.244724 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-11 00:18:31.244769 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-11 00:18:31.244782 | orchestrator |
2026-03-11 00:18:31.244794 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-11 00:18:31.850992 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:31.851091 | orchestrator |
2026-03-11 00:18:31.851106 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-11 00:18:32.409171 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:32.409269 | orchestrator |
2026-03-11 00:18:32.409286 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-11 00:18:33.000977 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:33.001077 | orchestrator |
2026-03-11 00:18:33.001095 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-11 00:18:33.053604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-11 00:18:33.053700 | orchestrator |
2026-03-11 00:18:33.053720 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-11 00:18:33.084339 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:33.084449 | orchestrator |
2026-03-11 00:18:33.084465 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-11 00:18:33.641371 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-11 00:18:33.641471 | orchestrator |
2026-03-11 00:18:33.641486 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-11 00:18:33.727418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-11 00:18:33.727508 | orchestrator |
2026-03-11 00:18:33.727524 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-11 00:18:34.383902 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:34.383995 | orchestrator |
2026-03-11 00:18:34.384011 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-11 00:18:34.975425 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:34.975494 | orchestrator |
2026-03-11 00:18:34.975500 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-11 00:18:35.034890 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:18:35.034973 | orchestrator |
2026-03-11 00:18:35.034985 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-11 00:18:35.097602 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:35.097694 | orchestrator |
2026-03-11 00:18:35.097709 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-11 00:18:35.917604 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:35.917699 | orchestrator |
2026-03-11 00:18:35.917716 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-11 00:19:35.221903 | orchestrator | changed: [testbed-manager]
2026-03-11 00:19:35.222008 | orchestrator |
2026-03-11 00:19:35.222058 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-11 00:19:36.069029 | orchestrator | ok: [testbed-manager]
2026-03-11 00:19:36.069133 | orchestrator |
2026-03-11 00:19:36.069150 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-11 00:19:36.124666 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:19:36.124767 | orchestrator |
2026-03-11 00:19:36.124784 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-11 00:19:38.180865 | orchestrator | changed: [testbed-manager]
2026-03-11 00:19:38.180970 | orchestrator |
2026-03-11 00:19:38.180987 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-11 00:19:38.239923 | orchestrator | ok: [testbed-manager]
2026-03-11 00:19:38.240027 | orchestrator |
2026-03-11 00:19:38.240042 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-11 00:19:38.240055 | orchestrator |
2026-03-11 00:19:38.240066 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-11 00:19:38.343344 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:19:38.343440 | orchestrator |
2026-03-11 00:19:38.343455 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-11 00:20:38.384238 | orchestrator | Pausing for 60 seconds
2026-03-11 00:20:38.384402 | orchestrator | changed: [testbed-manager]
2026-03-11 00:20:38.385258 | orchestrator |
2026-03-11 00:20:38.385306 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-11 00:20:40.921034 | orchestrator | changed: [testbed-manager]
2026-03-11 00:20:40.921140 | orchestrator |
2026-03-11 00:20:40.921157 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-11 00:21:22.407338 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-11 00:21:22.407490 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-11 00:21:22.407507 | orchestrator | changed: [testbed-manager]
2026-03-11 00:21:22.407522 | orchestrator |
2026-03-11 00:21:22.407557 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-11 00:21:32.142464 | orchestrator | changed: [testbed-manager]
2026-03-11 00:21:32.142575 | orchestrator |
2026-03-11 00:21:32.142590 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-11 00:21:32.224479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-11 00:21:32.224577 | orchestrator |
2026-03-11 00:21:32.224592 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-11 00:21:32.224605 | orchestrator |
2026-03-11 00:21:32.224617 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-11 00:21:32.278761 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:21:32.278840 | orchestrator |
2026-03-11 00:21:32.278853 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-11 00:21:32.370809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-11 00:21:32.370907 | orchestrator |
2026-03-11 00:21:32.370924 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-11 00:21:33.113667 | orchestrator | changed: [testbed-manager]
2026-03-11 00:21:33.113773 | orchestrator |
2026-03-11 00:21:33.113791 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-11 00:21:36.250774 | orchestrator | ok: [testbed-manager]
2026-03-11 00:21:36.250874 | orchestrator |
2026-03-11 00:21:36.250891 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-11 00:21:36.322239 | orchestrator | ok: [testbed-manager] => {
2026-03-11 00:21:36.322400 | orchestrator |     "version_check_result.stdout_lines": [
2026-03-11 00:21:36.322420 | orchestrator |         "=== OSISM Container Version Check ===",
2026-03-11 00:21:36.322432 | orchestrator |         "Checking running containers against expected versions...",
2026-03-11 00:21:36.322445 | orchestrator |         "",
2026-03-11 00:21:36.322457 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-11 00:21:36.322468 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-11 00:21:36.322480 | orchestrator |         "  Enabled: true",
2026-03-11 00:21:36.322492 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-11 00:21:36.322503 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:21:36.322515 | orchestrator |         "",
2026-03-11 00:21:36.322526 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-11 00:21:36.322537 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-11 00:21:36.322549 | orchestrator |         "  Enabled: true",
2026-03-11 00:21:36.322583 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-11 00:21:36.322596 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:21:36.322606 | orchestrator |         "",
2026-03-11 00:21:36.322618 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-11 00:21:36.322629 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-11 00:21:36.322640 | orchestrator |         "  Enabled: true",
2026-03-11 00:21:36.322651 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-11 00:21:36.322663 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:21:36.322674 | orchestrator |         "",
2026-03-11 00:21:36.322685 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-11 00:21:36.322696 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-11 00:21:36.322707 | orchestrator |         "  Enabled: true",
2026-03-11 00:21:36.322718 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-11 00:21:36.322729 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:21:36.322741 | orchestrator |         "",
2026-03-11 00:21:36.322753 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-11 00:21:36.322768 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-11 00:21:36.322780 | orchestrator |         "  Enabled: true",
2026-03-11 00:21:36.322793 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-11 00:21:36.322805 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:21:36.322817 | orchestrator |         "",
2026-03-11 00:21:36.322829 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-03-11 00:21:36.322842 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-11 00:21:36.322854 | orchestrator |         "  Enabled: true",
2026-03-11 00:21:36.322866 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-11 00:21:36.322879 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:21:36.322892 | orchestrator |         "",
2026-03-11 00:21:36.322904 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-03-11 00:21:36.322917 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-11 00:21:36.322929 | orchestrator |         "  Enabled: true",
2026-03-11 00:21:36.322942 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-11 00:21:36.322955 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:21:36.322967 | orchestrator |         "",
2026-03-11 00:21:36.322980 | orchestrator |         "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-11 00:21:36.322992 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-11 00:21:36.323005 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323017 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-11 00:21:36.323029 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323042 | orchestrator | "", 2026-03-11 00:21:36.323054 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-11 00:21:36.323067 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-11 00:21:36.323079 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323092 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-11 00:21:36.323105 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323116 | orchestrator | "", 2026-03-11 00:21:36.323126 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-11 00:21:36.323137 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-11 00:21:36.323148 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323159 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-11 00:21:36.323170 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323181 | orchestrator | "", 2026-03-11 00:21:36.323191 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-11 00:21:36.323218 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323229 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323260 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323271 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323282 | orchestrator | "", 2026-03-11 00:21:36.323293 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-11 00:21:36.323304 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323347 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323359 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323370 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323381 | orchestrator | "", 2026-03-11 00:21:36.323393 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-11 00:21:36.323403 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323414 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323425 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323436 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323446 | orchestrator | "", 2026-03-11 00:21:36.323457 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-11 00:21:36.323468 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323479 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323490 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323519 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323530 | orchestrator | "", 2026-03-11 00:21:36.323541 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-11 00:21:36.323552 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323563 | orchestrator | " Enabled: true", 2026-03-11 00:21:36.323584 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-11 00:21:36.323595 | orchestrator | " Status: ✅ MATCH", 2026-03-11 00:21:36.323606 | orchestrator | "", 2026-03-11 00:21:36.323617 | orchestrator | "=== Summary ===", 2026-03-11 00:21:36.323628 | orchestrator | "Errors (version mismatches): 0", 2026-03-11 00:21:36.323639 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-11 00:21:36.323650 | orchestrator | "", 2026-03-11 00:21:36.323661 | orchestrator | "✅ All running containers match expected versions!" 2026-03-11 00:21:36.323672 | orchestrator | ] 2026-03-11 00:21:36.323683 | orchestrator | } 2026-03-11 00:21:36.323695 | orchestrator | 2026-03-11 00:21:36.323706 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-11 00:21:36.369857 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:21:36.369946 | orchestrator | 2026-03-11 00:21:36.369957 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:21:36.369966 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-11 00:21:36.369973 | orchestrator | 2026-03-11 00:21:36.470006 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-11 00:21:36.470122 | orchestrator | + deactivate 2026-03-11 00:21:36.470131 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-11 00:21:36.470139 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-11 00:21:36.470145 | orchestrator | + export PATH 2026-03-11 00:21:36.470151 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-11 00:21:36.470157 | orchestrator | + '[' -n '' ']' 2026-03-11 00:21:36.470163 | orchestrator | + hash -r 2026-03-11 00:21:36.470169 | orchestrator | + '[' -n '' ']' 2026-03-11 00:21:36.470174 | orchestrator | + unset VIRTUAL_ENV 2026-03-11 00:21:36.470180 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-11 00:21:36.470186 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-11 00:21:36.470191 | orchestrator | + unset -f deactivate 2026-03-11 00:21:36.470198 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-11 00:21:36.478641 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-11 00:21:36.478693 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-11 00:21:36.478701 | orchestrator | + local max_attempts=60 2026-03-11 00:21:36.478709 | orchestrator | + local name=ceph-ansible 2026-03-11 00:21:36.478736 | orchestrator | + local attempt_num=1 2026-03-11 00:21:36.479311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:21:36.508210 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:21:36.508267 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-11 00:21:36.508276 | orchestrator | + local max_attempts=60 2026-03-11 00:21:36.508285 | orchestrator | + local name=kolla-ansible 2026-03-11 00:21:36.508293 | orchestrator | + local attempt_num=1 2026-03-11 00:21:36.509024 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-11 00:21:36.545915 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:21:36.546006 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-11 00:21:36.546076 | orchestrator | + local max_attempts=60 2026-03-11 00:21:36.546090 | orchestrator | + local name=osism-ansible 2026-03-11 00:21:36.546102 | orchestrator | + local attempt_num=1 2026-03-11 00:21:36.546609 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-11 00:21:36.579019 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:21:36.579098 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-11 00:21:36.579112 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-11 00:21:37.185300 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-11 00:21:37.351873 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-11 00:21:37.351970 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.351984 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.351996 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-11 00:21:37.352010 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-11 00:21:37.352043 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.352055 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.352066 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-03-11 00:21:37.352077 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.352088 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-11 00:21:37.352099 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.352110 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-11 00:21:37.352121 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.352164 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-11 00:21:37.352176 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.352187 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-11 00:21:37.357538 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-11 00:21:37.403717 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-11 00:21:37.403799 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-11 00:21:37.408355 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-11 00:21:49.664364 | orchestrator | 2026-03-11 00:21:49 | INFO  | Task 97f274b3-85f1-470e-9dcb-124abc53ec7e (resolvconf) was prepared for execution. 2026-03-11 00:21:49.664476 | orchestrator | 2026-03-11 00:21:49 | INFO  | It takes a moment until task 97f274b3-85f1-470e-9dcb-124abc53ec7e (resolvconf) has been started and output is visible here. 
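The `wait_for_container_healthy 60 <name>` calls traced in the bash output above poll `docker inspect` for the container's health status. A minimal sketch of that helper, reconstructed from the trace (the exact retry delay is an assumption, not shown in the log):

```shell
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# Polls the container's Docker health status until it reports "healthy",
# giving up after max_attempts polls. The 5-second sleep is an assumption.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log all three containers (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) were already healthy on the first poll, so the loop body never ran.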
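The `++ semver 9.5.0 7.0.0` line followed by `+ [[ 1 -ge 0 ]]` indicates a version-comparison helper that prints 1 when the first version is greater. A possible stand-in (an assumption, since the actual `semver` implementation is not shown) using `sort -V`:

```shell
# Hypothetical stand-in for the semver helper traced above: prints 1 if
# the first version is greater, 0 if equal, -1 if smaller. Relies on
# GNU coreutils' version sort (sort -V).
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}
```

With Docker Compose 9.5.0 against a 7.0.0 minimum, this prints 1, matching the `[[ 1 -ge 0 ]]` guard in the trace.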
2026-03-11 00:22:02.975747 | orchestrator | 2026-03-11 00:22:02.975877 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-11 00:22:02.975895 | orchestrator | 2026-03-11 00:22:02.975908 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:22:02.975919 | orchestrator | Wednesday 11 March 2026 00:21:53 +0000 (0:00:00.140) 0:00:00.140 ******* 2026-03-11 00:22:02.975930 | orchestrator | ok: [testbed-manager] 2026-03-11 00:22:02.975943 | orchestrator | 2026-03-11 00:22:02.975954 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-11 00:22:02.975966 | orchestrator | Wednesday 11 March 2026 00:21:57 +0000 (0:00:03.630) 0:00:03.770 ******* 2026-03-11 00:22:02.975977 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:22:02.975989 | orchestrator | 2026-03-11 00:22:02.976000 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-11 00:22:02.976011 | orchestrator | Wednesday 11 March 2026 00:21:57 +0000 (0:00:00.061) 0:00:03.832 ******* 2026-03-11 00:22:02.976022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-11 00:22:02.976035 | orchestrator | 2026-03-11 00:22:02.976046 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-11 00:22:02.976057 | orchestrator | Wednesday 11 March 2026 00:21:57 +0000 (0:00:00.080) 0:00:03.913 ******* 2026-03-11 00:22:02.976088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-11 00:22:02.976100 | orchestrator | 2026-03-11 00:22:02.976111 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-11 00:22:02.976122 | orchestrator | Wednesday 11 March 2026 00:21:57 +0000 (0:00:00.087) 0:00:04.000 ******* 2026-03-11 00:22:02.976133 | orchestrator | ok: [testbed-manager] 2026-03-11 00:22:02.976144 | orchestrator | 2026-03-11 00:22:02.976155 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-11 00:22:02.976166 | orchestrator | Wednesday 11 March 2026 00:21:58 +0000 (0:00:01.017) 0:00:05.018 ******* 2026-03-11 00:22:02.976177 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:22:02.976188 | orchestrator | 2026-03-11 00:22:02.976199 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-11 00:22:02.976210 | orchestrator | Wednesday 11 March 2026 00:21:58 +0000 (0:00:00.063) 0:00:05.082 ******* 2026-03-11 00:22:02.976221 | orchestrator | ok: [testbed-manager] 2026-03-11 00:22:02.976257 | orchestrator | 2026-03-11 00:22:02.976294 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-11 00:22:02.976307 | orchestrator | Wednesday 11 March 2026 00:21:59 +0000 (0:00:00.478) 0:00:05.560 ******* 2026-03-11 00:22:02.976320 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:22:02.976333 | orchestrator | 2026-03-11 00:22:02.976346 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-11 00:22:02.976359 | orchestrator | Wednesday 11 March 2026 00:21:59 +0000 (0:00:00.060) 0:00:05.621 ******* 2026-03-11 00:22:02.976370 | orchestrator | changed: [testbed-manager] 2026-03-11 00:22:02.976381 | orchestrator | 2026-03-11 00:22:02.976392 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-11 00:22:02.976402 | orchestrator | Wednesday 11 March 2026 00:21:59 +0000 (0:00:00.519) 0:00:06.140 ******* 2026-03-11 00:22:02.976413 | orchestrator | changed: 
[testbed-manager] 2026-03-11 00:22:02.976424 | orchestrator | 2026-03-11 00:22:02.976435 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-11 00:22:02.976446 | orchestrator | Wednesday 11 March 2026 00:22:00 +0000 (0:00:00.943) 0:00:07.084 ******* 2026-03-11 00:22:02.976456 | orchestrator | ok: [testbed-manager] 2026-03-11 00:22:02.976468 | orchestrator | 2026-03-11 00:22:02.976479 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-11 00:22:02.976490 | orchestrator | Wednesday 11 March 2026 00:22:01 +0000 (0:00:00.932) 0:00:08.017 ******* 2026-03-11 00:22:02.976501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-11 00:22:02.976512 | orchestrator | 2026-03-11 00:22:02.976523 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-11 00:22:02.976534 | orchestrator | Wednesday 11 March 2026 00:22:01 +0000 (0:00:00.074) 0:00:08.091 ******* 2026-03-11 00:22:02.976544 | orchestrator | changed: [testbed-manager] 2026-03-11 00:22:02.976555 | orchestrator | 2026-03-11 00:22:02.976566 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:22:02.976578 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:22:02.976588 | orchestrator | 2026-03-11 00:22:02.976599 | orchestrator | 2026-03-11 00:22:02.976610 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:22:02.976621 | orchestrator | Wednesday 11 March 2026 00:22:02 +0000 (0:00:01.096) 0:00:09.188 ******* 2026-03-11 00:22:02.976632 | orchestrator | =============================================================================== 2026-03-11 00:22:02.976643 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.63s 2026-03-11 00:22:02.976653 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2026-03-11 00:22:02.976664 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.02s 2026-03-11 00:22:02.976675 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.94s 2026-03-11 00:22:02.976686 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2026-03-11 00:22:02.976697 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2026-03-11 00:22:02.976727 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2026-03-11 00:22:02.976739 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-03-11 00:22:02.976750 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-11 00:22:02.976761 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-03-11 00:22:02.976772 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-11 00:22:02.976783 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-11 00:22:02.976802 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2026-03-11 00:22:03.236158 | orchestrator | + osism apply sshconfig 2026-03-11 00:22:15.137321 | orchestrator | 2026-03-11 00:22:15 | INFO  | Task c0c8b37c-00b4-4125-a611-c0b09528b833 (sshconfig) was prepared for execution. 
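The `osism.commons.resolvconf` play above removed the distro resolv.conf packages and linked `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf`. The linking step can be sketched safely against a scratch root (the scratch-root indirection is added here so the sketch is runnable anywhere; the role itself operates on the real filesystem and restarts systemd-resolved):

```shell
# Sketch of the stub-resolv.conf link created by the play above, done in a
# throwaway directory tree instead of / so it can run without root.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/run/systemd/resolve" "$ROOT/etc"
echo "nameserver 127.0.0.53" > "$ROOT/run/systemd/resolve/stub-resolv.conf"
# Relative symlink, mirroring /etc/resolv.conf -> /run/systemd/resolve/stub-resolv.conf
ln -sf ../run/systemd/resolve/stub-resolv.conf "$ROOT/etc/resolv.conf"
```

On the real host, the role then starts/enables and restarts `systemd-resolved` so the stub resolver at 127.0.0.53 serves DNS.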
2026-03-11 00:22:15.137433 | orchestrator | 2026-03-11 00:22:15 | INFO  | It takes a moment until task c0c8b37c-00b4-4125-a611-c0b09528b833 (sshconfig) has been started and output is visible here. 2026-03-11 00:22:25.678375 | orchestrator | 2026-03-11 00:22:25.678496 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-11 00:22:25.678513 | orchestrator | 2026-03-11 00:22:25.678548 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-11 00:22:25.678561 | orchestrator | Wednesday 11 March 2026 00:22:18 +0000 (0:00:00.113) 0:00:00.113 ******* 2026-03-11 00:22:25.678572 | orchestrator | ok: [testbed-manager] 2026-03-11 00:22:25.678584 | orchestrator | 2026-03-11 00:22:25.678595 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-11 00:22:25.678606 | orchestrator | Wednesday 11 March 2026 00:22:19 +0000 (0:00:00.471) 0:00:00.585 ******* 2026-03-11 00:22:25.678618 | orchestrator | changed: [testbed-manager] 2026-03-11 00:22:25.678630 | orchestrator | 2026-03-11 00:22:25.678640 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-11 00:22:25.678651 | orchestrator | Wednesday 11 March 2026 00:22:19 +0000 (0:00:00.447) 0:00:01.032 ******* 2026-03-11 00:22:25.678663 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-11 00:22:25.678674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-11 00:22:25.678686 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-11 00:22:25.678697 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-11 00:22:25.678707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-11 00:22:25.678718 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-11 00:22:25.678729 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-11 00:22:25.678740 | orchestrator | 2026-03-11 00:22:25.678751 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-11 00:22:25.678762 | orchestrator | Wednesday 11 March 2026 00:22:24 +0000 (0:00:05.235) 0:00:06.268 ******* 2026-03-11 00:22:25.678773 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:22:25.678784 | orchestrator | 2026-03-11 00:22:25.678795 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-11 00:22:25.678806 | orchestrator | Wednesday 11 March 2026 00:22:24 +0000 (0:00:00.071) 0:00:06.339 ******* 2026-03-11 00:22:25.678816 | orchestrator | changed: [testbed-manager] 2026-03-11 00:22:25.678827 | orchestrator | 2026-03-11 00:22:25.678838 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:22:25.678850 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:22:25.678864 | orchestrator | 2026-03-11 00:22:25.678876 | orchestrator | 2026-03-11 00:22:25.678889 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:22:25.678901 | orchestrator | Wednesday 11 March 2026 00:22:25 +0000 (0:00:00.543) 0:00:06.883 ******* 2026-03-11 00:22:25.678914 | orchestrator | =============================================================================== 2026-03-11 00:22:25.678926 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.24s 2026-03-11 00:22:25.678938 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2026-03-11 00:22:25.678950 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.47s 2026-03-11 00:22:25.678963 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.45s 2026-03-11 00:22:25.678975 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-03-11 00:22:25.939686 | orchestrator | + osism apply known-hosts 2026-03-11 00:22:37.869907 | orchestrator | 2026-03-11 00:22:37 | INFO  | Task 6805a22f-708f-4893-bc94-8f4f0a49789d (known-hosts) was prepared for execution. 2026-03-11 00:22:37.870080 | orchestrator | 2026-03-11 00:22:37 | INFO  | It takes a moment until task 6805a22f-708f-4893-bc94-8f4f0a49789d (known-hosts) has been started and output is visible here. 2026-03-11 00:22:54.121071 | orchestrator | 2026-03-11 00:22:54.121222 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-11 00:22:54.121239 | orchestrator | 2026-03-11 00:22:54.121250 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-11 00:22:54.121262 | orchestrator | Wednesday 11 March 2026 00:22:41 +0000 (0:00:00.156) 0:00:00.156 ******* 2026-03-11 00:22:54.121272 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-11 00:22:54.121283 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-11 00:22:54.121293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-11 00:22:54.121303 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-11 00:22:54.121313 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-11 00:22:54.121322 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-11 00:22:54.121332 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-11 00:22:54.121342 | orchestrator | 2026-03-11 00:22:54.121351 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-11 00:22:54.121362 | orchestrator | Wednesday 11 March 2026 00:22:47 +0000 (0:00:05.857) 0:00:06.014 ******* 2026-03-11 
00:22:54.121373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-11 00:22:54.121385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-11 00:22:54.121395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-11 00:22:54.121405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-11 00:22:54.121414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-11 00:22:54.121434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-11 00:22:54.121444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-11 00:22:54.121454 | orchestrator | 2026-03-11 00:22:54.121464 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:22:54.121473 | orchestrator | Wednesday 11 March 2026 00:22:47 +0000 (0:00:00.161) 0:00:06.176 ******* 2026-03-11 00:22:54.121483 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEXh0Px35cECmH5zo3qKBFyflop2vZG6P0ILUWBouo0QqIWeh8qQoyPEKHlikYMafBvB83Ao9U6Z9rWy4lAV8nY=) 2026-03-11 00:22:54.121503 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZkIHdHaM4AEA8cdv4nCWZR56OHelSWpKG+CcvGmRv6TjdP0hDQN/Ip4Xo5iJX8IsNuJ2tbnlSCsNpCkCWXTKxnDr3gui9rI/Q10Mx9Q73AItmF5GMLCyVleJBuA/WI5zH9Dls1OhOlxrxeSHoxzNwIHk7W9pUtDHw3SO/9WEHKcyMKDdGLyQqqQl/30IOxggEVD9HeeTcpeeNiJNdI4uNfHi6mAsQWdnmLLVkNoZanqqJN579SCzygeF3hKwE8NBMYeRBWSDyoqM6X0UKxhlERWK8FSZyUURpUhSvlMeMzMkN1r1VvMWkYN4qEgYf3ib+S6lrZ2RCO+cKO8/4j6Dcs498yq9lBgRDXVvVK9wNUMTOvnaQnqSS7kOU0n4q8Vb210KecGwzAZL9aEj44RdIp0J33cVVSe1Ls4r1XBBr6ogTdo76lUzAdAv2FVA9dly/eBsKp3b+FYdQWpykpNhnFLjIw/Mo6tI0+j+2sHrUn3JOGyM6wAXl02qENx/zG4k=) 2026-03-11 00:22:54.121536 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKB6B/6TWRbn4oS7ln3pEW/OIW2Ed5GJ0gA6rJtg3DoA) 2026-03-11 00:22:54.121549 | orchestrator | 2026-03-11 00:22:54.121558 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:22:54.121568 | orchestrator | Wednesday 11 March 2026 00:22:49 +0000 (0:00:01.147) 0:00:07.324 ******* 2026-03-11 00:22:54.121578 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB4AvyxgxVHWRBBhmfvRv2Rip2HbFthDeSANZbX9mpxz) 2026-03-11 00:22:54.121620 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCy5XEbE9LQhH+1ifn2HsejRsFTsPw1zzyktPpmxvN0YXRRYRCLjLTkGh0895E175ko26/AWTfSGDwOXPA8TYyE1woaUbW3vGRo8yeyxFK53ZZum3hdx+Xoz8VwULqPXTtfRBQkgJowu2fHzJ/ywBB1yHDYEbkWuM3QOCoX8jgNoDUoularQ2o8utoC5u75hQWp5yXUfUucDf5CAurPCEYLMBFMEQNqzcgSQaiHLqcQoZLawOMj5MRtY5Dqn+dAAs+CNbNDRtCu4fwgaHEj5q2alY7GVn3Rz7RfAXAlSCV4jgXc+MSOfUxJzVFhNIp2ELP7osz0FF5yaQsmCKHbTkBFNZrUEIO4HeaBrakngoNz2FkDNZiX+YIjEPSivf2goDclbnsUCTbuqEnRLpfAZKu+J7pnncKq7eb9R/PB/x58L3k6w0t1sFLvHiR40Y3YboqKCgH3AfH3NG5vVPgYOGF35yKrqtANMis85mkTxtUt7Swbs3kuL1LB1U8I80A01K0=) 2026-03-11 00:22:54.121633 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE8Thnsrq3J8lsZ9foAUzqrkwFPhlBCwkzyO+44y335xoZ2vS8rtyGVE4zHfVTGnYEA9va1QjdLfC7+c0k9ClwU=) 2026-03-11 00:22:54.121645 | orchestrator | 2026-03-11 00:22:54.121655 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:22:54.121666 | orchestrator | Wednesday 11 March 2026 00:22:50 +0000 (0:00:00.998) 0:00:08.323 ******* 2026-03-11 00:22:54.121677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCys19Cxm6ssrkE2UKh7lh2Xoh5w7IfmXn9YfD6rLIk3zRH2QmopIk6iVNBKN+8JJdYIlGBPQ6//3BTR4j/j0w9r5pCgjDDxYOj8dlcQG1dnBWtgRl4XvojxUF4vd7CveZbXxF14nRbZJViqVXzZOiGpAlA9NgYQvT7blKre+CAlru39DR2E6UCxkE3y1rOeIDobWPOwf7AFpnI0UEOWSpj2yhalfqUc9i8tXycQ8/0JrCBRV7vpnWkuxkHwaRe7XoU4XNEdF5gFHvT9FyYT0ghDV7QxU9b7WoHVyRlIV2sK5cN+EydKBqr3j2zBPGrefxfNgMzLdHprc3np1rnzSA75fmmXZ4YzYIK2bLID+uAEx92WwgyUCLDQy3hL6nvYs/04WyCN7SSVBMmGaHxDHe33QRkQbGyR4E8DRJWHgyp+GkVfuQzS0vRvno6wstMTJMq/w9chKycDTk6fQ1zxnLtqRVJQcZG6NYvhM6X8f0zpr5A3jRuBw1jracBmHGfrN0=) 2026-03-11 00:22:54.121689 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI/T/bn1MMUoj5skv93eQPUSPX+BsjP90X+1biDKVPZFHp9iRW4mLvO/x/BIr9ToM3dug7owpIdzYiaZbcVXlTg=) 
2026-03-11 00:22:54.121700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmZA60FrukE1e/UlcfcJFRZS7MoC5+06AHuylFtHKOD) 2026-03-11 00:22:54.121711 | orchestrator | 2026-03-11 00:22:54.121722 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:22:54.121733 | orchestrator | Wednesday 11 March 2026 00:22:51 +0000 (0:00:01.021) 0:00:09.344 ******* 2026-03-11 00:22:54.121744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6gFZMQ2eVFM0ZOI02KWJTVznZnJrdYd/BR8eHgjZpB9DA6grp8H9FlSV6dl0HABmAUu9tQpZ1feQQYrCaWZNrEGMb4Gx09RmH2ngh0y8IS/ithYC6kjFH+/1eUk4VC/fN5RKL4ULh2DhuEZ7mJd3ZO9kxQKrqPjgZQT4LoA6Y7MUYiC2cXYTeSSZShUHwlVMjiGFsf/IWBRDoTmfCouEaYTbTwGedzkFfP/0+kYqxNPZ1jy9R6t6rumAkVgufNhbiZuepMm5+AQYTtWQDndpE/Ou6lE8nP7NkimH4TVzw7pDQJXmkakLjwnGEDHPh4CqMEr1rko8P5GJVjei1OSdTqo3rfcVtFfkg60/uRjT2Z5Uu3Ji/E6B5vu3kV72iyBZEomwFbm/g0i6o8O8AsYee/HV5VBdhLAn3PFQ8aef51+x+g5Sc3wMFbNsmcrnUrFVTRCDirGXYYEQ6e1cl25mNP051+Zg3bYUw3kqv/GqIiIJF9B4/1XSuN3EVQnB2ep0=) 2026-03-11 00:22:54.121756 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDaIRcfGEWHiQKL3EkXhCat4cWnhHzEN9GRB1w/tmffi) 2026-03-11 00:22:54.121774 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPi7UJY03DEUbTFeOwzKsAeGnfk09UREtXROCFa4+UrQR7AheC+ltmMaXAcfK7833fo0Sg9qSagGrH+aDS7MhO4=) 2026-03-11 00:22:54.121785 | orchestrator | 2026-03-11 00:22:54.121796 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:22:54.121807 | orchestrator | Wednesday 11 March 2026 00:22:52 +0000 (0:00:01.004) 0:00:10.349 ******* 2026-03-11 00:22:54.121939 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6Ohj281MRr1InsO3yllZUOSh7K586unC1o9aRWQqCSMzvPg8V/b91oqbt5x/Tgevx+fmEItx9pTYXDJ9b6UHPPe5G5LTu/sm2KWMKoCl7o8SsWjwdb4Rv0wXP2XSBA3wCHFrXyY+F9ZSGg6c0wJ4vR0kYbfcBn+M3dVkGXE7LkOMJxBIA9utPPKO/bloqa8GXNppXCJbz2PKIJaYS/TPZLksY2cDt4lDuzlEuFIBdSS75VcjlpJvawXdKkwykSCzQyhaIP7NUIaBVAxtKPoBzdZxHdUdEeXYwZh82/pUCRakKbB4Dzir2piWISCLUfs8kFZqamxPfyUsLGQpCep1w8qx5g+7uPNmN/myhaCAO/NJXCKRyS4JapPOPIahf2Y/bazz5gMJWbBHw0MmQm8hQizWcOGALiQpeDBR/wbE0FH0pVZCHiWhLjwxj+Esr8lNOnuGKZtb/RxE2+PF+JE9xYBdtEpDD/9PZ/OlCr4v0UBYxHAGiCM4bNNdSTERsiMM=) 2026-03-11 00:22:54.121952 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFrmboOoVekwbZH7r+8/Rxmz/iHY2OWXni+zPllCXJTqYzPusp9KdEsSRS5ZaESXiFR4Cd6mUO8SxjEgKcdLbJA=) 2026-03-11 00:22:54.121962 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPSI0oII7giQANkZhuCVtynPNbgKkJgVIeaDuzGIruno) 2026-03-11 00:22:54.121972 | orchestrator | 2026-03-11 00:22:54.121982 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:22:54.121991 | orchestrator | Wednesday 11 March 2026 00:22:53 +0000 (0:00:01.028) 0:00:11.378 ******* 2026-03-11 00:22:54.122007 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDhKqhNNP4OoDPNAUly4musa3vgq1DZlIJSXOYnhwWqLa3vVD15cwij3a4hVQUvEbRV/HWA2hGQ6CXxVhFAb1eE=) 2026-03-11 00:23:04.180820 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1U9Z1BdnvHXwvKnRS0D4BhDeTSEsaAysM1SdA7dMsgA7ELhEBfCbAXJPvMqauIEfREy9TDyi8szyRym8cdoJV6Tj+ZUc6p4p5tMPrtGqT/3C4Vb0oLRoNUYRfQy3qhwvb0Hg+VbKhg3IBaS6f8TDbTjid9rTKBbt/r+G8VsZMzuTRWC2dUtijoAo0rZPTd92UO1ZxIq/NjTQ+mZuZ8efqbP4CmKlZKaKVLWgpRZGh45j56yiotZW42gjrleFqlcOtAAg+iquX2e5G+UP+XO/WdyobFo+vvJGSoRwlChjh4cjSG9BJ+qrQNEiNEWu+DhrBwyRAcdiEx3S5ZmVfbJwypSOkPeDCYPmfCYN/tDVruJymCkhjA8lO+auNLSQG7dYanCieqi4OQzpn5uFaU/6bK0tWs48SJ9GW1PDv39JdPCBwmrBOyBuV25XU58dxZB4aKd8Ace3dSh/H8DgxVeeqOeJnKTH0JlU9wyLB2VYH9LIGl8tSEeqtQRJftK4GLXE=) 2026-03-11 00:23:04.180947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL5SQYAW7r1FPsNvH2qWiUYMjLA8QhSuaUFFjtV0NURr) 2026-03-11 00:23:04.180967 | orchestrator | 2026-03-11 00:23:04.180982 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:04.180994 | orchestrator | Wednesday 11 March 2026 00:22:54 +0000 (0:00:01.019) 0:00:12.397 ******* 2026-03-11 00:23:04.181007 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpFqbRHS8wkP1K7ynBjtk5sqhHK/150mSuMpzOvaVAySZcr0NlorA+4zF7IRIHbAnTeqOX5/NTYwJfBd1HyuQ9BdJ4yKN7T65CMNmhs4pCd+q985TPP+KN1Bj/y+Xrh4c8QUliu/VQBMSTiVxg8wDnUnBanEpz9GaW3aCbQJkipQDDlHqChBSs1clYpX5EyoH0BmNgQP4z1hJ81SUNwrkJNRFvUaSiifkwfGuruK4O6jc8WahPVpXjDOp+VKBtIZL/DkeUpsaJLX8/7SIpWs30Ih50VDv5S3Jpu4mKKRddLjdS+TVBhwCfiDh5R8ycafbbTNRSslSkFE3aobGBVMA1EdmcCwShpj1lRM+MOiSPEwWQIh8Uc77ELW1Qf7zCNC2fIO974VSf2ZdrSI1cN0wIV4OVWf3aHKnTDawCMQW3sxIt3SrAPF5H3qKor7YOURNuTl1kprKPCWBiVSgIKxxVH2v5jUpM9Er0hJq8QLvorPrkmpYRQNgMhogpi2x90v8=) 2026-03-11 00:23:04.181020 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLFXJ57Vdf+LoSMVOxQuiYok6TlkAX6j+0/VsN8FqB38698VrLxIcynsQjr/sKKhiALpv9N6pdFRWaDk5m59Uw=) 2026-03-11 00:23:04.181063 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7uHNSfm+uMNLZr+EYi5EbFbhccjfR8HUBqE1+f0i44) 2026-03-11 00:23:04.181083 | orchestrator | 2026-03-11 00:23:04.181104 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-11 00:23:04.181125 | orchestrator | Wednesday 11 March 2026 00:22:55 +0000 (0:00:00.925) 0:00:13.322 ******* 2026-03-11 00:23:04.181146 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-11 00:23:04.181218 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-11 00:23:04.181238 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-11 00:23:04.181256 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-11 00:23:04.181276 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-11 00:23:04.181294 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-11 00:23:04.181313 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-11 00:23:04.181325 | orchestrator | 2026-03-11 00:23:04.181336 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-11 00:23:04.181349 | orchestrator | Wednesday 11 March 2026 00:22:59 +0000 (0:00:04.929) 0:00:18.252 ******* 2026-03-11 00:23:04.181361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-11 00:23:04.181374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-11 00:23:04.181385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-03-11 00:23:04.181396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-11 00:23:04.181407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-11 00:23:04.181418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-11 00:23:04.181429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-11 00:23:04.181439 | orchestrator | 2026-03-11 00:23:04.181470 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:04.181482 | orchestrator | Wednesday 11 March 2026 00:23:00 +0000 (0:00:00.161) 0:00:18.414 ******* 2026-03-11 00:23:04.181494 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKB6B/6TWRbn4oS7ln3pEW/OIW2Ed5GJ0gA6rJtg3DoA) 2026-03-11 00:23:04.181509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZkIHdHaM4AEA8cdv4nCWZR56OHelSWpKG+CcvGmRv6TjdP0hDQN/Ip4Xo5iJX8IsNuJ2tbnlSCsNpCkCWXTKxnDr3gui9rI/Q10Mx9Q73AItmF5GMLCyVleJBuA/WI5zH9Dls1OhOlxrxeSHoxzNwIHk7W9pUtDHw3SO/9WEHKcyMKDdGLyQqqQl/30IOxggEVD9HeeTcpeeNiJNdI4uNfHi6mAsQWdnmLLVkNoZanqqJN579SCzygeF3hKwE8NBMYeRBWSDyoqM6X0UKxhlERWK8FSZyUURpUhSvlMeMzMkN1r1VvMWkYN4qEgYf3ib+S6lrZ2RCO+cKO8/4j6Dcs498yq9lBgRDXVvVK9wNUMTOvnaQnqSS7kOU0n4q8Vb210KecGwzAZL9aEj44RdIp0J33cVVSe1Ls4r1XBBr6ogTdo76lUzAdAv2FVA9dly/eBsKp3b+FYdQWpykpNhnFLjIw/Mo6tI0+j+2sHrUn3JOGyM6wAXl02qENx/zG4k=) 2026-03-11 00:23:04.181540 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEXh0Px35cECmH5zo3qKBFyflop2vZG6P0ILUWBouo0QqIWeh8qQoyPEKHlikYMafBvB83Ao9U6Z9rWy4lAV8nY=) 2026-03-11 00:23:04.181564 | orchestrator | 2026-03-11 00:23:04.181575 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:04.181586 | orchestrator | Wednesday 11 March 2026 00:23:01 +0000 (0:00:00.970) 0:00:19.384 ******* 2026-03-11 00:23:04.181603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy5XEbE9LQhH+1ifn2HsejRsFTsPw1zzyktPpmxvN0YXRRYRCLjLTkGh0895E175ko26/AWTfSGDwOXPA8TYyE1woaUbW3vGRo8yeyxFK53ZZum3hdx+Xoz8VwULqPXTtfRBQkgJowu2fHzJ/ywBB1yHDYEbkWuM3QOCoX8jgNoDUoularQ2o8utoC5u75hQWp5yXUfUucDf5CAurPCEYLMBFMEQNqzcgSQaiHLqcQoZLawOMj5MRtY5Dqn+dAAs+CNbNDRtCu4fwgaHEj5q2alY7GVn3Rz7RfAXAlSCV4jgXc+MSOfUxJzVFhNIp2ELP7osz0FF5yaQsmCKHbTkBFNZrUEIO4HeaBrakngoNz2FkDNZiX+YIjEPSivf2goDclbnsUCTbuqEnRLpfAZKu+J7pnncKq7eb9R/PB/x58L3k6w0t1sFLvHiR40Y3YboqKCgH3AfH3NG5vVPgYOGF35yKrqtANMis85mkTxtUt7Swbs3kuL1LB1U8I80A01K0=) 2026-03-11 00:23:04.181615 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE8Thnsrq3J8lsZ9foAUzqrkwFPhlBCwkzyO+44y335xoZ2vS8rtyGVE4zHfVTGnYEA9va1QjdLfC7+c0k9ClwU=) 
2026-03-11 00:23:04.181627 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB4AvyxgxVHWRBBhmfvRv2Rip2HbFthDeSANZbX9mpxz) 2026-03-11 00:23:04.181637 | orchestrator | 2026-03-11 00:23:04.181648 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:04.181659 | orchestrator | Wednesday 11 March 2026 00:23:02 +0000 (0:00:01.023) 0:00:20.408 ******* 2026-03-11 00:23:04.181670 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCys19Cxm6ssrkE2UKh7lh2Xoh5w7IfmXn9YfD6rLIk3zRH2QmopIk6iVNBKN+8JJdYIlGBPQ6//3BTR4j/j0w9r5pCgjDDxYOj8dlcQG1dnBWtgRl4XvojxUF4vd7CveZbXxF14nRbZJViqVXzZOiGpAlA9NgYQvT7blKre+CAlru39DR2E6UCxkE3y1rOeIDobWPOwf7AFpnI0UEOWSpj2yhalfqUc9i8tXycQ8/0JrCBRV7vpnWkuxkHwaRe7XoU4XNEdF5gFHvT9FyYT0ghDV7QxU9b7WoHVyRlIV2sK5cN+EydKBqr3j2zBPGrefxfNgMzLdHprc3np1rnzSA75fmmXZ4YzYIK2bLID+uAEx92WwgyUCLDQy3hL6nvYs/04WyCN7SSVBMmGaHxDHe33QRkQbGyR4E8DRJWHgyp+GkVfuQzS0vRvno6wstMTJMq/w9chKycDTk6fQ1zxnLtqRVJQcZG6NYvhM6X8f0zpr5A3jRuBw1jracBmHGfrN0=) 2026-03-11 00:23:04.181681 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI/T/bn1MMUoj5skv93eQPUSPX+BsjP90X+1biDKVPZFHp9iRW4mLvO/x/BIr9ToM3dug7owpIdzYiaZbcVXlTg=) 2026-03-11 00:23:04.181692 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmZA60FrukE1e/UlcfcJFRZS7MoC5+06AHuylFtHKOD) 2026-03-11 00:23:04.181703 | orchestrator | 2026-03-11 00:23:04.181714 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:04.181725 | orchestrator | Wednesday 11 March 2026 00:23:03 +0000 (0:00:01.016) 0:00:21.424 ******* 2026-03-11 00:23:04.181736 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIDaIRcfGEWHiQKL3EkXhCat4cWnhHzEN9GRB1w/tmffi) 2026-03-11 00:23:04.181762 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6gFZMQ2eVFM0ZOI02KWJTVznZnJrdYd/BR8eHgjZpB9DA6grp8H9FlSV6dl0HABmAUu9tQpZ1feQQYrCaWZNrEGMb4Gx09RmH2ngh0y8IS/ithYC6kjFH+/1eUk4VC/fN5RKL4ULh2DhuEZ7mJd3ZO9kxQKrqPjgZQT4LoA6Y7MUYiC2cXYTeSSZShUHwlVMjiGFsf/IWBRDoTmfCouEaYTbTwGedzkFfP/0+kYqxNPZ1jy9R6t6rumAkVgufNhbiZuepMm5+AQYTtWQDndpE/Ou6lE8nP7NkimH4TVzw7pDQJXmkakLjwnGEDHPh4CqMEr1rko8P5GJVjei1OSdTqo3rfcVtFfkg60/uRjT2Z5Uu3Ji/E6B5vu3kV72iyBZEomwFbm/g0i6o8O8AsYee/HV5VBdhLAn3PFQ8aef51+x+g5Sc3wMFbNsmcrnUrFVTRCDirGXYYEQ6e1cl25mNP051+Zg3bYUw3kqv/GqIiIJF9B4/1XSuN3EVQnB2ep0=) 2026-03-11 00:23:08.166233 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPi7UJY03DEUbTFeOwzKsAeGnfk09UREtXROCFa4+UrQR7AheC+ltmMaXAcfK7833fo0Sg9qSagGrH+aDS7MhO4=) 2026-03-11 00:23:08.166431 | orchestrator | 2026-03-11 00:23:08.166463 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:08.166484 | orchestrator | Wednesday 11 March 2026 00:23:04 +0000 (0:00:01.033) 0:00:22.458 ******* 2026-03-11 00:23:08.166503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFrmboOoVekwbZH7r+8/Rxmz/iHY2OWXni+zPllCXJTqYzPusp9KdEsSRS5ZaESXiFR4Cd6mUO8SxjEgKcdLbJA=) 2026-03-11 00:23:08.166524 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6Ohj281MRr1InsO3yllZUOSh7K586unC1o9aRWQqCSMzvPg8V/b91oqbt5x/Tgevx+fmEItx9pTYXDJ9b6UHPPe5G5LTu/sm2KWMKoCl7o8SsWjwdb4Rv0wXP2XSBA3wCHFrXyY+F9ZSGg6c0wJ4vR0kYbfcBn+M3dVkGXE7LkOMJxBIA9utPPKO/bloqa8GXNppXCJbz2PKIJaYS/TPZLksY2cDt4lDuzlEuFIBdSS75VcjlpJvawXdKkwykSCzQyhaIP7NUIaBVAxtKPoBzdZxHdUdEeXYwZh82/pUCRakKbB4Dzir2piWISCLUfs8kFZqamxPfyUsLGQpCep1w8qx5g+7uPNmN/myhaCAO/NJXCKRyS4JapPOPIahf2Y/bazz5gMJWbBHw0MmQm8hQizWcOGALiQpeDBR/wbE0FH0pVZCHiWhLjwxj+Esr8lNOnuGKZtb/RxE2+PF+JE9xYBdtEpDD/9PZ/OlCr4v0UBYxHAGiCM4bNNdSTERsiMM=) 2026-03-11 00:23:08.166548 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPSI0oII7giQANkZhuCVtynPNbgKkJgVIeaDuzGIruno) 2026-03-11 00:23:08.166569 | orchestrator | 2026-03-11 00:23:08.166589 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:08.166608 | orchestrator | Wednesday 11 March 2026 00:23:05 +0000 (0:00:00.992) 0:00:23.450 ******* 2026-03-11 00:23:08.166627 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1U9Z1BdnvHXwvKnRS0D4BhDeTSEsaAysM1SdA7dMsgA7ELhEBfCbAXJPvMqauIEfREy9TDyi8szyRym8cdoJV6Tj+ZUc6p4p5tMPrtGqT/3C4Vb0oLRoNUYRfQy3qhwvb0Hg+VbKhg3IBaS6f8TDbTjid9rTKBbt/r+G8VsZMzuTRWC2dUtijoAo0rZPTd92UO1ZxIq/NjTQ+mZuZ8efqbP4CmKlZKaKVLWgpRZGh45j56yiotZW42gjrleFqlcOtAAg+iquX2e5G+UP+XO/WdyobFo+vvJGSoRwlChjh4cjSG9BJ+qrQNEiNEWu+DhrBwyRAcdiEx3S5ZmVfbJwypSOkPeDCYPmfCYN/tDVruJymCkhjA8lO+auNLSQG7dYanCieqi4OQzpn5uFaU/6bK0tWs48SJ9GW1PDv39JdPCBwmrBOyBuV25XU58dxZB4aKd8Ace3dSh/H8DgxVeeqOeJnKTH0JlU9wyLB2VYH9LIGl8tSEeqtQRJftK4GLXE=) 2026-03-11 00:23:08.166642 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDhKqhNNP4OoDPNAUly4musa3vgq1DZlIJSXOYnhwWqLa3vVD15cwij3a4hVQUvEbRV/HWA2hGQ6CXxVhFAb1eE=) 2026-03-11 00:23:08.166656 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL5SQYAW7r1FPsNvH2qWiUYMjLA8QhSuaUFFjtV0NURr) 2026-03-11 00:23:08.166669 | orchestrator | 2026-03-11 00:23:08.166681 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:23:08.166694 | orchestrator | Wednesday 11 March 2026 00:23:06 +0000 (0:00:00.951) 0:00:24.402 ******* 2026-03-11 00:23:08.166707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLFXJ57Vdf+LoSMVOxQuiYok6TlkAX6j+0/VsN8FqB38698VrLxIcynsQjr/sKKhiALpv9N6pdFRWaDk5m59Uw=) 2026-03-11 00:23:08.166739 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpFqbRHS8wkP1K7ynBjtk5sqhHK/150mSuMpzOvaVAySZcr0NlorA+4zF7IRIHbAnTeqOX5/NTYwJfBd1HyuQ9BdJ4yKN7T65CMNmhs4pCd+q985TPP+KN1Bj/y+Xrh4c8QUliu/VQBMSTiVxg8wDnUnBanEpz9GaW3aCbQJkipQDDlHqChBSs1clYpX5EyoH0BmNgQP4z1hJ81SUNwrkJNRFvUaSiifkwfGuruK4O6jc8WahPVpXjDOp+VKBtIZL/DkeUpsaJLX8/7SIpWs30Ih50VDv5S3Jpu4mKKRddLjdS+TVBhwCfiDh5R8ycafbbTNRSslSkFE3aobGBVMA1EdmcCwShpj1lRM+MOiSPEwWQIh8Uc77ELW1Qf7zCNC2fIO974VSf2ZdrSI1cN0wIV4OVWf3aHKnTDawCMQW3sxIt3SrAPF5H3qKor7YOURNuTl1kprKPCWBiVSgIKxxVH2v5jUpM9Er0hJq8QLvorPrkmpYRQNgMhogpi2x90v8=) 2026-03-11 00:23:08.166753 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7uHNSfm+uMNLZr+EYi5EbFbhccjfR8HUBqE1+f0i44) 2026-03-11 00:23:08.166765 | orchestrator | 2026-03-11 00:23:08.166778 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-11 00:23:08.166805 | orchestrator | Wednesday 11 March 2026 00:23:07 +0000 (0:00:00.971) 0:00:25.373 ******* 2026-03-11 00:23:08.166820 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-11 00:23:08.166832 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-11 00:23:08.166846 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-11 00:23:08.166859 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-11 00:23:08.166891 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-11 00:23:08.166903 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-11 00:23:08.166914 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-11 00:23:08.166925 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:23:08.166936 | orchestrator | 2026-03-11 00:23:08.166947 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-11 00:23:08.166958 | orchestrator | Wednesday 11 March 2026 00:23:07 +0000 (0:00:00.151) 0:00:25.525 ******* 2026-03-11 00:23:08.166969 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:23:08.166980 | orchestrator | 2026-03-11 00:23:08.166990 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-11 00:23:08.167001 | orchestrator | Wednesday 11 March 2026 00:23:07 +0000 (0:00:00.051) 0:00:25.576 ******* 2026-03-11 00:23:08.167012 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:23:08.167022 | orchestrator | 2026-03-11 00:23:08.167033 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-11 00:23:08.167044 | orchestrator | Wednesday 11 March 2026 00:23:07 +0000 (0:00:00.042) 0:00:25.619 ******* 2026-03-11 00:23:08.167055 | orchestrator | changed: [testbed-manager] 2026-03-11 00:23:08.167066 | orchestrator | 2026-03-11 00:23:08.167076 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:23:08.167087 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:23:08.167099 | orchestrator | 2026-03-11 00:23:08.167110 | orchestrator | 2026-03-11 
00:23:08.167121 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:23:08.167132 | orchestrator | Wednesday 11 March 2026 00:23:07 +0000 (0:00:00.648) 0:00:26.268 ******* 2026-03-11 00:23:08.167175 | orchestrator | =============================================================================== 2026-03-11 00:23:08.167194 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.86s 2026-03-11 00:23:08.167205 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.93s 2026-03-11 00:23:08.167217 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-03-11 00:23:08.167228 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-11 00:23:08.167239 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-11 00:23:08.167250 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:23:08.167260 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:23:08.167271 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:23:08.167282 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:23:08.167293 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-11 00:23:08.167304 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-11 00:23:08.167318 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-11 00:23:08.167337 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-11 
00:23:08.167354 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-11 00:23:08.167372 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-11 00:23:08.167413 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-03-11 00:23:08.167432 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.65s 2026-03-11 00:23:08.167448 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-03-11 00:23:08.167465 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-11 00:23:08.167484 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-03-11 00:23:08.422263 | orchestrator | + osism apply squid 2026-03-11 00:23:20.402952 | orchestrator | 2026-03-11 00:23:20 | INFO  | Task e1895d6d-bc05-4175-96cf-84a5f683e53a (squid) was prepared for execution. 2026-03-11 00:23:20.403031 | orchestrator | 2026-03-11 00:23:20 | INFO  | It takes a moment until task e1895d6d-bc05-4175-96cf-84a5f683e53a (squid) has been started and output is visible here. 
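The `Write scanned known_hosts entries` tasks above each emit items of the form `<host> <key-type> <base64-key>`, the standard OpenSSH known_hosts layout, first keyed by inventory hostname and then again by `ansible_host` IP. A minimal sketch (not part of the job itself; the function name is illustrative) of parsing one such entry:

```python
# Parse an OpenSSH known_hosts entry of the form
#   <host> <key-type> <base64-key>
# as written by the osism.commons.known_hosts role in the log above.
def parse_known_hosts_entry(line: str) -> dict:
    # maxsplit=2 keeps the base64 key intact even if it contained spaces
    host, key_type, key = line.strip().split(None, 2)
    return {"host": host, "type": key_type, "key": key}

entry = parse_known_hosts_entry(
    "testbed-node-0 ssh-ed25519 "
    "AAAAC3NzaC1lZDI1NTE5AAAAIPSI0oII7giQANkZhuCVtynPNbgKkJgVIeaDuzGIruno"
)
print(entry["type"])  # ssh-ed25519
```

Each node therefore appears six times in the manager's known_hosts file: three key types (rsa, ecdsa, ed25519) times two aliases (hostname and IP).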
2026-03-11 00:25:11.831048 | orchestrator | 2026-03-11 00:25:11.831149 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-11 00:25:11.831162 | orchestrator | 2026-03-11 00:25:11.831171 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-11 00:25:11.831180 | orchestrator | Wednesday 11 March 2026 00:23:24 +0000 (0:00:00.115) 0:00:00.115 ******* 2026-03-11 00:25:11.831189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-11 00:25:11.831198 | orchestrator | 2026-03-11 00:25:11.831206 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-11 00:25:11.831214 | orchestrator | Wednesday 11 March 2026 00:23:24 +0000 (0:00:00.067) 0:00:00.183 ******* 2026-03-11 00:25:11.831222 | orchestrator | ok: [testbed-manager] 2026-03-11 00:25:11.831232 | orchestrator | 2026-03-11 00:25:11.831240 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-11 00:25:11.831248 | orchestrator | Wednesday 11 March 2026 00:23:25 +0000 (0:00:01.117) 0:00:01.301 ******* 2026-03-11 00:25:11.831257 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-11 00:25:11.831265 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-11 00:25:11.831273 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-11 00:25:11.831281 | orchestrator | 2026-03-11 00:25:11.831289 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-11 00:25:11.831297 | orchestrator | Wednesday 11 March 2026 00:23:26 +0000 (0:00:00.997) 0:00:02.299 ******* 2026-03-11 00:25:11.831305 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-11 00:25:11.831313 | 
orchestrator | 2026-03-11 00:25:11.831320 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-11 00:25:11.831328 | orchestrator | Wednesday 11 March 2026 00:23:27 +0000 (0:00:00.923) 0:00:03.222 ******* 2026-03-11 00:25:11.831336 | orchestrator | ok: [testbed-manager] 2026-03-11 00:25:11.831344 | orchestrator | 2026-03-11 00:25:11.831352 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-11 00:25:11.831360 | orchestrator | Wednesday 11 March 2026 00:23:27 +0000 (0:00:00.318) 0:00:03.541 ******* 2026-03-11 00:25:11.831368 | orchestrator | changed: [testbed-manager] 2026-03-11 00:25:11.831376 | orchestrator | 2026-03-11 00:25:11.831384 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-11 00:25:11.831392 | orchestrator | Wednesday 11 March 2026 00:23:28 +0000 (0:00:00.844) 0:00:04.386 ******* 2026-03-11 00:25:11.831400 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-11 00:25:11.831410 | orchestrator | ok: [testbed-manager] 2026-03-11 00:25:11.831421 | orchestrator | 2026-03-11 00:25:11.831429 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-11 00:25:11.831437 | orchestrator | Wednesday 11 March 2026 00:23:58 +0000 (0:00:30.251) 0:00:34.637 ******* 2026-03-11 00:25:11.831470 | orchestrator | changed: [testbed-manager] 2026-03-11 00:25:11.831478 | orchestrator | 2026-03-11 00:25:11.831486 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-11 00:25:11.831494 | orchestrator | Wednesday 11 March 2026 00:24:10 +0000 (0:00:12.008) 0:00:46.646 ******* 2026-03-11 00:25:11.831502 | orchestrator | Pausing for 60 seconds 2026-03-11 00:25:11.831511 | orchestrator | changed: [testbed-manager] 2026-03-11 00:25:11.831519 | orchestrator | 2026-03-11 00:25:11.831527 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-11 00:25:11.831535 | orchestrator | Wednesday 11 March 2026 00:25:10 +0000 (0:01:00.093) 0:01:46.740 ******* 2026-03-11 00:25:11.831543 | orchestrator | ok: [testbed-manager] 2026-03-11 00:25:11.831551 | orchestrator | 2026-03-11 00:25:11.831560 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-11 00:25:11.831568 | orchestrator | Wednesday 11 March 2026 00:25:10 +0000 (0:00:00.081) 0:01:46.822 ******* 2026-03-11 00:25:11.831577 | orchestrator | changed: [testbed-manager] 2026-03-11 00:25:11.831587 | orchestrator | 2026-03-11 00:25:11.831595 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:25:11.831605 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:25:11.831614 | orchestrator | 2026-03-11 00:25:11.831622 | orchestrator | 2026-03-11 00:25:11.831632 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-11 00:25:11.831641 | orchestrator | Wednesday 11 March 2026 00:25:11 +0000 (0:00:00.665) 0:01:47.487 ******* 2026-03-11 00:25:11.831650 | orchestrator | =============================================================================== 2026-03-11 00:25:11.831659 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-11 00:25:11.831668 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.25s 2026-03-11 00:25:11.831677 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.01s 2026-03-11 00:25:11.831702 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.12s 2026-03-11 00:25:11.831712 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.00s 2026-03-11 00:25:11.831721 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.92s 2026-03-11 00:25:11.831730 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.84s 2026-03-11 00:25:11.831740 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s 2026-03-11 00:25:11.831748 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s 2026-03-11 00:25:11.831756 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-03-11 00:25:11.831764 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-03-11 00:25:12.174675 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-11 00:25:12.175001 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-11 00:25:12.219422 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-11 00:25:12.219514 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-11 00:25:12.223893 | orchestrator | + set -e 2026-03-11 00:25:12.223992 | orchestrator | + NAMESPACE=kolla/release 2026-03-11 00:25:12.224010 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-11 00:25:12.230832 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-11 00:25:12.297405 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-11 00:25:12.298303 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-11 00:25:24.376288 | orchestrator | 2026-03-11 00:25:24 | INFO  | Task ed777c5c-ac33-489e-9835-84d56c2dfbc0 (operator) was prepared for execution. 2026-03-11 00:25:24.376423 | orchestrator | 2026-03-11 00:25:24 | INFO  | It takes a moment until task ed777c5c-ac33-489e-9835-84d56c2dfbc0 (operator) has been started and output is visible here. 2026-03-11 00:25:40.301392 | orchestrator | 2026-03-11 00:25:40.301507 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-11 00:25:40.301525 | orchestrator | 2026-03-11 00:25:40.301536 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:25:40.301549 | orchestrator | Wednesday 11 March 2026 00:25:28 +0000 (0:00:00.140) 0:00:00.140 ******* 2026-03-11 00:25:40.301561 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:25:40.301575 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:25:40.301586 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:25:40.301599 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:25:40.301611 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:25:40.301623 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:25:40.301635 | orchestrator | 2026-03-11 00:25:40.301647 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-11 00:25:40.301659 | orchestrator | Wednesday 11 March 2026 00:25:31 +0000 (0:00:03.334) 0:00:03.474 
******* 2026-03-11 00:25:40.301671 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:25:40.301683 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:25:40.301695 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:25:40.301706 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:25:40.301718 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:25:40.301730 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:25:40.301742 | orchestrator | 2026-03-11 00:25:40.301754 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-11 00:25:40.301767 | orchestrator | 2026-03-11 00:25:40.301779 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-11 00:25:40.301910 | orchestrator | Wednesday 11 March 2026 00:25:32 +0000 (0:00:00.776) 0:00:04.250 ******* 2026-03-11 00:25:40.301965 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:25:40.301978 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:25:40.301990 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:25:40.302001 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:25:40.302011 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:25:40.302083 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:25:40.302098 | orchestrator | 2026-03-11 00:25:40.302114 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-11 00:25:40.302128 | orchestrator | Wednesday 11 March 2026 00:25:32 +0000 (0:00:00.139) 0:00:04.390 ******* 2026-03-11 00:25:40.302157 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:25:40.302171 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:25:40.302188 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:25:40.302201 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:25:40.302215 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:25:40.302227 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:25:40.302240 | orchestrator | 2026-03-11 00:25:40.302252 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-11 00:25:40.302263 | orchestrator | Wednesday 11 March 2026 00:25:32 +0000 (0:00:00.151) 0:00:04.541 ******* 2026-03-11 00:25:40.302274 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:40.302287 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:40.302300 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:40.302312 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:40.302324 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:40.302338 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:40.302350 | orchestrator | 2026-03-11 00:25:40.302363 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-11 00:25:40.302376 | orchestrator | Wednesday 11 March 2026 00:25:33 +0000 (0:00:00.766) 0:00:05.307 ******* 2026-03-11 00:25:40.302388 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:40.302399 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:40.302409 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:40.302421 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:40.302434 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:40.302446 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:40.302459 | orchestrator | 2026-03-11 00:25:40.302472 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-11 00:25:40.302509 | orchestrator | Wednesday 11 March 2026 00:25:34 +0000 (0:00:00.791) 0:00:06.099 ******* 2026-03-11 00:25:40.302523 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-11 00:25:40.302536 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-11 00:25:40.302548 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-11 00:25:40.302561 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-11 00:25:40.302573 | 
orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-11 00:25:40.302585 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-11 00:25:40.302598 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-11 00:25:40.302610 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-11 00:25:40.302623 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-11 00:25:40.302635 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-11 00:25:40.302646 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-11 00:25:40.302656 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-11 00:25:40.302667 | orchestrator | 2026-03-11 00:25:40.302678 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-11 00:25:40.302689 | orchestrator | Wednesday 11 March 2026 00:25:35 +0000 (0:00:01.242) 0:00:07.341 ******* 2026-03-11 00:25:40.302699 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:40.302710 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:40.302721 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:40.302732 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:40.302744 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:40.302756 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:40.302769 | orchestrator | 2026-03-11 00:25:40.302781 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-11 00:25:40.302795 | orchestrator | Wednesday 11 March 2026 00:25:36 +0000 (0:00:01.190) 0:00:08.532 ******* 2026-03-11 00:25:40.302808 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-11 00:25:40.302819 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-11 00:25:40.302832 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-11 00:25:40.302844 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:25:40.302877 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:25:40.302890 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:25:40.302978 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:25:40.302991 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:25:40.303003 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:25:40.303014 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-11 00:25:40.303026 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-11 00:25:40.303038 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-11 00:25:40.303049 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-11 00:25:40.303061 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-11 00:25:40.303073 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-11 00:25:40.303084 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:25:40.303095 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:25:40.303107 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:25:40.303119 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:25:40.303132 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:25:40.303154 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:25:40.303165 | 
orchestrator | 2026-03-11 00:25:40.303176 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-11 00:25:40.303201 | orchestrator | Wednesday 11 March 2026 00:25:38 +0000 (0:00:01.363) 0:00:09.895 ******* 2026-03-11 00:25:40.303213 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:40.303223 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:40.303234 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:25:40.303244 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:40.303255 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:40.303268 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:40.303280 | orchestrator | 2026-03-11 00:25:40.303291 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-11 00:25:40.303303 | orchestrator | Wednesday 11 March 2026 00:25:38 +0000 (0:00:00.167) 0:00:10.063 ******* 2026-03-11 00:25:40.303314 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:40.303325 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:40.303336 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:25:40.303348 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:40.303359 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:40.303368 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:40.303378 | orchestrator | 2026-03-11 00:25:40.303387 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-11 00:25:40.303400 | orchestrator | Wednesday 11 March 2026 00:25:38 +0000 (0:00:00.162) 0:00:10.225 ******* 2026-03-11 00:25:40.303411 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:40.303423 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:40.303434 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:40.303447 | orchestrator | changed: [testbed-node-3] 2026-03-11 
00:25:40.303458 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:40.303469 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:40.303481 | orchestrator | 2026-03-11 00:25:40.303493 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-11 00:25:40.303504 | orchestrator | Wednesday 11 March 2026 00:25:39 +0000 (0:00:00.588) 0:00:10.814 ******* 2026-03-11 00:25:40.303516 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:40.303527 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:40.303538 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:25:40.303549 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:40.303561 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:40.303573 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:40.303585 | orchestrator | 2026-03-11 00:25:40.303596 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-11 00:25:40.303608 | orchestrator | Wednesday 11 March 2026 00:25:39 +0000 (0:00:00.164) 0:00:10.979 ******* 2026-03-11 00:25:40.303617 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-11 00:25:40.303639 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-11 00:25:40.303651 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:40.303663 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:40.303674 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-11 00:25:40.303686 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:40.303698 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-11 00:25:40.303708 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-11 00:25:40.303721 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:40.303732 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:40.303741 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-11 
00:25:40.303751 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:40.303760 | orchestrator | 2026-03-11 00:25:40.303773 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-11 00:25:40.303784 | orchestrator | Wednesday 11 March 2026 00:25:39 +0000 (0:00:00.710) 0:00:11.689 ******* 2026-03-11 00:25:40.303805 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:40.303818 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:40.303830 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:25:40.303841 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:40.303853 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:40.303864 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:40.303876 | orchestrator | 2026-03-11 00:25:40.303888 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-11 00:25:40.303919 | orchestrator | Wednesday 11 March 2026 00:25:40 +0000 (0:00:00.139) 0:00:11.829 ******* 2026-03-11 00:25:40.303931 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:40.303942 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:40.303954 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:25:40.303966 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:40.303991 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:41.641217 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:41.641312 | orchestrator | 2026-03-11 00:25:41.641327 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-11 00:25:41.641339 | orchestrator | Wednesday 11 March 2026 00:25:40 +0000 (0:00:00.155) 0:00:11.984 ******* 2026-03-11 00:25:41.641349 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:41.641359 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:41.641371 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
00:25:41.641382 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:41.641392 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:41.641403 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:41.641414 | orchestrator | 2026-03-11 00:25:41.641425 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-11 00:25:41.641436 | orchestrator | Wednesday 11 March 2026 00:25:40 +0000 (0:00:00.157) 0:00:12.141 ******* 2026-03-11 00:25:41.641447 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:41.641458 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:41.641469 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:41.641480 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:41.641491 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:41.641502 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:41.641512 | orchestrator | 2026-03-11 00:25:41.641523 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-11 00:25:41.641534 | orchestrator | Wednesday 11 March 2026 00:25:41 +0000 (0:00:00.717) 0:00:12.859 ******* 2026-03-11 00:25:41.641545 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:41.641556 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:41.641566 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:25:41.641578 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:41.641589 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:41.641600 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:41.641610 | orchestrator | 2026-03-11 00:25:41.641621 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:25:41.641656 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 00:25:41.641669 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 00:25:41.641679 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 00:25:41.641690 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 00:25:41.641701 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 00:25:41.641743 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 00:25:41.641761 | orchestrator | 2026-03-11 00:25:41.641783 | orchestrator | 2026-03-11 00:25:41.641805 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:25:41.641826 | orchestrator | Wednesday 11 March 2026 00:25:41 +0000 (0:00:00.219) 0:00:13.079 ******* 2026-03-11 00:25:41.641845 | orchestrator | =============================================================================== 2026-03-11 00:25:41.641859 | orchestrator | Gathering Facts --------------------------------------------------------- 3.33s 2026-03-11 00:25:41.641872 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.36s 2026-03-11 00:25:41.641884 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.24s 2026-03-11 00:25:41.641921 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s 2026-03-11 00:25:41.641933 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2026-03-11 00:25:41.641945 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2026-03-11 00:25:41.641958 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.77s 2026-03-11 00:25:41.641970 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.72s 2026-03-11 00:25:41.641982 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2026-03-11 00:25:41.641994 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2026-03-11 00:25:41.642006 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2026-03-11 00:25:41.642110 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s 2026-03-11 00:25:41.642128 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2026-03-11 00:25:41.642141 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-03-11 00:25:41.642152 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2026-03-11 00:25:41.642163 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2026-03-11 00:25:41.642174 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-03-11 00:25:41.642184 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2026-03-11 00:25:41.642195 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2026-03-11 00:25:41.943255 | orchestrator | + osism apply --environment custom facts 2026-03-11 00:25:43.776760 | orchestrator | 2026-03-11 00:25:43 | INFO  | Trying to run play facts in environment custom 2026-03-11 00:25:53.896100 | orchestrator | 2026-03-11 00:25:53 | INFO  | Task ac817ce8-70d7-40b0-ae31-d9cdf21da32f (facts) was prepared for execution. 2026-03-11 00:25:53.896189 | orchestrator | 2026-03-11 00:25:53 | INFO  | It takes a moment until task ac817ce8-70d7-40b0-ae31-d9cdf21da32f (facts) has been started and output is visible here. 
2026-03-11 00:26:40.901036 | orchestrator | 2026-03-11 00:26:40.901214 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-11 00:26:40.901233 | orchestrator | 2026-03-11 00:26:40.901245 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-11 00:26:40.901257 | orchestrator | Wednesday 11 March 2026 00:25:57 +0000 (0:00:00.085) 0:00:00.085 ******* 2026-03-11 00:26:40.901268 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:40.901280 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:40.901291 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:40.901302 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:40.901313 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:40.901323 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:40.901334 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:40.901371 | orchestrator | 2026-03-11 00:26:40.901384 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-11 00:26:40.901395 | orchestrator | Wednesday 11 March 2026 00:25:59 +0000 (0:00:01.352) 0:00:01.437 ******* 2026-03-11 00:26:40.901405 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:40.901416 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:40.901427 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:40.901437 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:40.901448 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:40.901458 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:40.901469 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:40.901480 | orchestrator | 2026-03-11 00:26:40.901492 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-11 00:26:40.901502 | orchestrator | 2026-03-11 00:26:40.901513 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-11 00:26:40.901524 | orchestrator | Wednesday 11 March 2026 00:26:00 +0000 (0:00:01.210) 0:00:02.648 ******* 2026-03-11 00:26:40.901534 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:40.901545 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:40.901556 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:40.901566 | orchestrator | 2026-03-11 00:26:40.901577 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-11 00:26:40.901588 | orchestrator | Wednesday 11 March 2026 00:26:00 +0000 (0:00:00.101) 0:00:02.750 ******* 2026-03-11 00:26:40.901599 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:40.901610 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:40.901620 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:40.901631 | orchestrator | 2026-03-11 00:26:40.901642 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-11 00:26:40.901652 | orchestrator | Wednesday 11 March 2026 00:26:00 +0000 (0:00:00.204) 0:00:02.954 ******* 2026-03-11 00:26:40.901663 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:40.901673 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:40.901684 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:40.901694 | orchestrator | 2026-03-11 00:26:40.901705 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-11 00:26:40.901716 | orchestrator | Wednesday 11 March 2026 00:26:01 +0000 (0:00:00.234) 0:00:03.188 ******* 2026-03-11 00:26:40.901729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:26:40.901741 | orchestrator | 2026-03-11 00:26:40.901752 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-11 00:26:40.901763 | orchestrator | Wednesday 11 March 2026 00:26:01 +0000 (0:00:00.137) 0:00:03.326 ******* 2026-03-11 00:26:40.901773 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:40.901784 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:40.901795 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:40.901805 | orchestrator | 2026-03-11 00:26:40.901847 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-11 00:26:40.901860 | orchestrator | Wednesday 11 March 2026 00:26:01 +0000 (0:00:00.485) 0:00:03.811 ******* 2026-03-11 00:26:40.901870 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:26:40.901881 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:26:40.901892 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:26:40.901903 | orchestrator | 2026-03-11 00:26:40.901914 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-11 00:26:40.901924 | orchestrator | Wednesday 11 March 2026 00:26:01 +0000 (0:00:00.144) 0:00:03.956 ******* 2026-03-11 00:26:40.901935 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:40.901946 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:40.901964 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:40.901982 | orchestrator | 2026-03-11 00:26:40.902000 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-11 00:26:40.902107 | orchestrator | Wednesday 11 March 2026 00:26:02 +0000 (0:00:01.027) 0:00:04.984 ******* 2026-03-11 00:26:40.902121 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:40.902132 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:40.902143 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:40.902153 | orchestrator | 2026-03-11 00:26:40.902164 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-11 
00:26:40.902175 | orchestrator | Wednesday 11 March 2026 00:26:03 +0000 (0:00:00.486) 0:00:05.471 ******* 2026-03-11 00:26:40.902186 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:40.902196 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:40.902207 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:40.902218 | orchestrator | 2026-03-11 00:26:40.902229 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-11 00:26:40.902287 | orchestrator | Wednesday 11 March 2026 00:26:04 +0000 (0:00:01.091) 0:00:06.562 ******* 2026-03-11 00:26:40.902300 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:40.902310 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:40.902321 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:40.902332 | orchestrator | 2026-03-11 00:26:40.902342 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-11 00:26:40.902353 | orchestrator | Wednesday 11 March 2026 00:26:21 +0000 (0:00:17.469) 0:00:24.032 ******* 2026-03-11 00:26:40.902364 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:26:40.902375 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:26:40.902385 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:26:40.902396 | orchestrator | 2026-03-11 00:26:40.902407 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-11 00:26:40.902436 | orchestrator | Wednesday 11 March 2026 00:26:22 +0000 (0:00:00.076) 0:00:24.108 ******* 2026-03-11 00:26:40.902448 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:40.902459 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:40.902469 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:40.902480 | orchestrator | 2026-03-11 00:26:40.902491 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-11 
00:26:40.902502 | orchestrator | Wednesday 11 March 2026 00:26:31 +0000 (0:00:09.397) 0:00:33.506 *******
2026-03-11 00:26:40.902512 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:26:40.902523 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:26:40.902534 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:26:40.902545 | orchestrator |
2026-03-11 00:26:40.902556 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-11 00:26:40.902566 | orchestrator | Wednesday 11 March 2026 00:26:31 +0000 (0:00:00.497) 0:00:34.004 *******
2026-03-11 00:26:40.902577 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-11 00:26:40.902589 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-11 00:26:40.902599 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-11 00:26:40.902610 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-11 00:26:40.902626 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-11 00:26:40.902637 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-11 00:26:40.902648 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-11 00:26:40.902659 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-11 00:26:40.902670 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-11 00:26:40.902680 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-11 00:26:40.902691 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-11 00:26:40.902702 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-11 00:26:40.902712 | orchestrator |
2026-03-11 00:26:40.902723 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-11 00:26:40.902742 | orchestrator | Wednesday 11 March 2026 00:26:35 +0000 (0:00:03.668) 0:00:37.673 *******
2026-03-11 00:26:40.902753 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:26:40.902763 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:26:40.902774 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:26:40.902785 | orchestrator |
2026-03-11 00:26:40.902795 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-11 00:26:40.902806 | orchestrator |
2026-03-11 00:26:40.902895 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-11 00:26:40.902910 | orchestrator | Wednesday 11 March 2026 00:26:37 +0000 (0:00:01.553) 0:00:39.226 *******
2026-03-11 00:26:40.902921 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:26:40.902931 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:26:40.902942 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:26:40.902953 | orchestrator | ok: [testbed-manager]
2026-03-11 00:26:40.902964 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:26:40.902974 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:26:40.902985 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:26:40.902995 | orchestrator |
2026-03-11 00:26:40.903006 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:26:40.903017 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:26:40.903031 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:26:40.903052 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:26:40.903076 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:26:40.903104 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:26:40.903123 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:26:40.903142 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:26:40.903161 | orchestrator |
2026-03-11 00:26:40.903180 | orchestrator |
2026-03-11 00:26:40.903200 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:26:40.903219 | orchestrator | Wednesday 11 March 2026 00:26:40 +0000 (0:00:03.736) 0:00:42.963 *******
2026-03-11 00:26:40.903239 | orchestrator | ===============================================================================
2026-03-11 00:26:40.903259 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.47s
2026-03-11 00:26:40.903279 | orchestrator | Install required packages (Debian) -------------------------------------- 9.40s
2026-03-11 00:26:40.903299 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.74s
2026-03-11 00:26:40.903319 | orchestrator | Copy fact files --------------------------------------------------------- 3.67s
2026-03-11 00:26:40.903338 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.55s
2026-03-11 00:26:40.903358 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s
2026-03-11 00:26:40.903392 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2026-03-11 00:26:41.157907 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-03-11 00:26:41.157976 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-03-11 00:26:41.157982 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2026-03-11 00:26:41.157986 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2026-03-11 00:26:41.158007 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s
2026-03-11 00:26:41.158011 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-03-11 00:26:41.158045 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-03-11 00:26:41.158049 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-03-11 00:26:41.158053 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-11 00:26:41.158058 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-03-11 00:26:41.158072 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-03-11 00:26:41.498112 | orchestrator | + osism apply bootstrap
2026-03-11 00:26:53.578941 | orchestrator | 2026-03-11 00:26:53 | INFO  | Task a52c7072-bb04-478b-8e2a-9a5e221ee507 (bootstrap) was prepared for execution.
2026-03-11 00:26:53.579123 | orchestrator | 2026-03-11 00:26:53 | INFO  | It takes a moment until task a52c7072-bb04-478b-8e2a-9a5e221ee507 (bootstrap) has been started and output is visible here.
2026-03-11 00:27:08.717836 | orchestrator |
2026-03-11 00:27:08.717935 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-11 00:27:08.717952 | orchestrator |
2026-03-11 00:27:08.717962 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-11 00:27:08.717974 | orchestrator | Wednesday 11 March 2026 00:26:57 +0000 (0:00:00.113) 0:00:00.113 *******
2026-03-11 00:27:08.717986 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:08.717993 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:08.717999 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:08.718005 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:08.718072 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:08.718138 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:08.718145 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:08.718151 | orchestrator |
2026-03-11 00:27:08.718157 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-11 00:27:08.718163 | orchestrator |
2026-03-11 00:27:08.718169 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-11 00:27:08.718175 | orchestrator | Wednesday 11 March 2026 00:26:57 +0000 (0:00:00.185) 0:00:00.299 *******
2026-03-11 00:27:08.718181 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:08.718186 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:08.718192 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:08.718197 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:08.718203 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:08.718208 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:08.718214 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:08.718219 | orchestrator |
2026-03-11 00:27:08.718225 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-11 00:27:08.718230 | orchestrator |
2026-03-11 00:27:08.718236 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-11 00:27:08.718241 | orchestrator | Wednesday 11 March 2026 00:27:01 +0000 (0:00:03.686) 0:00:03.985 *******
2026-03-11 00:27:08.718248 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-11 00:27:08.718253 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-11 00:27:08.718259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-11 00:27:08.718264 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-11 00:27:08.718270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:27:08.718275 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-11 00:27:08.718281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:27:08.718286 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-11 00:27:08.718292 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-11 00:27:08.718319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:27:08.718324 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-11 00:27:08.718330 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-11 00:27:08.718335 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-11 00:27:08.718340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:27:08.718346 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-11 00:27:08.718351 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-11 00:27:08.718359 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:08.718366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-11 00:27:08.718372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:27:08.718379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-11 00:27:08.718385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-11 00:27:08.718392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-11 00:27:08.718398 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-11 00:27:08.718404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:27:08.718410 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:27:08.718417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-11 00:27:08.718423 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:27:08.718430 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-11 00:27:08.718436 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-11 00:27:08.718443 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:27:08.718449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:27:08.718513 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-11 00:27:08.718520 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-11 00:27:08.718526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-11 00:27:08.718532 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:27:08.718538 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-11 00:27:08.718545 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:27:08.718551 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-11 00:27:08.718557 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:27:08.718563 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-11 00:27:08.718570 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-11 00:27:08.718576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:27:08.718583 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:27:08.718589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-11 00:27:08.718595 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-11 00:27:08.718602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:27:08.718622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-11 00:27:08.718629 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-11 00:27:08.718636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-11 00:27:08.718642 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-11 00:27:08.718648 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-11 00:27:08.718654 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:27:08.718661 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-11 00:27:08.718668 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:27:08.718674 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-11 00:27:08.718701 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:27:08.718708 | orchestrator |
2026-03-11 00:27:08.718714 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-11 00:27:08.718720 | orchestrator |
2026-03-11 00:27:08.718726 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-11 00:27:08.718731 | orchestrator | Wednesday 11 March 2026 00:27:01 +0000 (0:00:00.370) 0:00:04.355 *******
2026-03-11 00:27:08.718737 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:08.718742 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:08.718747 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:08.718753 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:08.718758 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:08.718764 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:08.718769 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:08.718774 | orchestrator |
2026-03-11 00:27:08.718780 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-11 00:27:08.718810 | orchestrator | Wednesday 11 March 2026 00:27:02 +0000 (0:00:01.185) 0:00:05.541 *******
2026-03-11 00:27:08.718816 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:08.718821 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:08.718826 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:08.718832 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:08.718837 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:08.718842 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:08.718848 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:08.718853 | orchestrator |
2026-03-11 00:27:08.718859 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-11 00:27:08.718864 | orchestrator | Wednesday 11 March 2026 00:27:04 +0000 (0:00:00.255) 0:00:06.710 *******
2026-03-11 00:27:08.718870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:27:08.718878 | orchestrator |
2026-03-11 00:27:08.718884 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-11 00:27:08.718889 | orchestrator | Wednesday 11 March 2026 00:27:04 +0000 (0:00:00.255) 0:00:06.966 *******
2026-03-11 00:27:08.718895 | orchestrator | changed: [testbed-manager]
2026-03-11 00:27:08.718900 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:27:08.718906 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:27:08.718911 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:27:08.718917 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:27:08.718922 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:27:08.718927 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:27:08.718932 | orchestrator |
2026-03-11 00:27:08.718938 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-11 00:27:08.718943 | orchestrator | Wednesday 11 March 2026 00:27:06 +0000 (0:00:01.969) 0:00:08.935 *******
2026-03-11 00:27:08.718949 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:08.718956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:27:08.718963 | orchestrator |
2026-03-11 00:27:08.718969 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-11 00:27:08.718974 | orchestrator | Wednesday 11 March 2026 00:27:06 +0000 (0:00:00.196) 0:00:09.132 *******
2026-03-11 00:27:08.718979 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:27:08.718985 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:27:08.718990 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:27:08.718996 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:27:08.719001 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:27:08.719006 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:27:08.719012 | orchestrator |
2026-03-11 00:27:08.719022 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-11 00:27:08.719028 | orchestrator | Wednesday 11 March 2026 00:27:07 +0000 (0:00:01.013) 0:00:10.145 *******
2026-03-11 00:27:08.719033 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:08.719038 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:27:08.719044 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:27:08.719049 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:27:08.719054 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:27:08.719060 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:27:08.719065 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:27:08.719070 | orchestrator |
2026-03-11 00:27:08.719076 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-11 00:27:08.719081 | orchestrator | Wednesday 11 March 2026 00:27:08 +0000 (0:00:00.590) 0:00:10.736 *******
2026-03-11 00:27:08.719087 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:27:08.719092 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:27:08.719097 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:27:08.719103 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:27:08.719112 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:27:08.719118 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:27:08.719135 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:08.719141 | orchestrator |
2026-03-11 00:27:08.719147 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-11 00:27:08.719153 | orchestrator | Wednesday 11 March 2026 00:27:08 +0000 (0:00:00.416) 0:00:11.152 *******
2026-03-11 00:27:08.719158 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:08.719171 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:27:08.719181 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:27:21.328373 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:27:21.328470 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:27:21.328481 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:27:21.328487 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:27:21.328494 | orchestrator |
2026-03-11 00:27:21.328502 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-11 00:27:21.328511 | orchestrator | Wednesday 11 March 2026 00:27:08 +0000 (0:00:00.214) 0:00:11.367 *******
2026-03-11 00:27:21.328521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:27:21.328541 | orchestrator |
2026-03-11 00:27:21.328549 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-11 00:27:21.328557 | orchestrator | Wednesday 11 March 2026 00:27:09 +0000 (0:00:00.300) 0:00:11.668 *******
2026-03-11 00:27:21.328564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:27:21.328571 | orchestrator |
2026-03-11 00:27:21.328577 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-11 00:27:21.328584 | orchestrator | Wednesday 11 March 2026 00:27:09 +0000 (0:00:00.283) 0:00:11.951 *******
2026-03-11 00:27:21.328590 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.328598 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.328605 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.328611 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.328618 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.328624 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.328631 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.328637 | orchestrator |
2026-03-11 00:27:21.328644 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-11 00:27:21.328650 | orchestrator | Wednesday 11 March 2026 00:27:11 +0000 (0:00:01.685) 0:00:13.637 *******
2026-03-11 00:27:21.328677 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:21.328684 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:27:21.328690 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:27:21.328696 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:27:21.328702 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:27:21.328708 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:27:21.328714 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:27:21.328720 | orchestrator |
2026-03-11 00:27:21.328725 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-11 00:27:21.328731 | orchestrator | Wednesday 11 March 2026 00:27:11 +0000 (0:00:00.231) 0:00:13.869 *******
2026-03-11 00:27:21.328737 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.328743 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.328750 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.328756 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.328762 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.328814 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.328821 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.328825 | orchestrator |
2026-03-11 00:27:21.328829 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-11 00:27:21.328833 | orchestrator | Wednesday 11 March 2026 00:27:11 +0000 (0:00:00.577) 0:00:14.446 *******
2026-03-11 00:27:21.328837 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:21.328840 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:27:21.328844 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:27:21.328848 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:27:21.328852 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:27:21.328856 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:27:21.328860 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:27:21.328864 | orchestrator |
2026-03-11 00:27:21.328868 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-11 00:27:21.328872 | orchestrator | Wednesday 11 March 2026 00:27:12 +0000 (0:00:00.301) 0:00:14.748 *******
2026-03-11 00:27:21.328878 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.328884 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:27:21.328890 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:27:21.328896 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:27:21.328901 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:27:21.328907 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:27:21.328913 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:27:21.328919 | orchestrator |
2026-03-11 00:27:21.328926 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-11 00:27:21.328932 | orchestrator | Wednesday 11 March 2026 00:27:12 +0000 (0:00:00.544) 0:00:15.292 *******
2026-03-11 00:27:21.328939 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.328945 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:27:21.328952 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:27:21.328958 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:27:21.328965 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:27:21.328971 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:27:21.328977 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:27:21.328983 | orchestrator |
2026-03-11 00:27:21.328988 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-11 00:27:21.328994 | orchestrator | Wednesday 11 March 2026 00:27:13 +0000 (0:00:01.107) 0:00:16.400 *******
2026-03-11 00:27:21.329000 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329005 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329023 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.329029 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.329036 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329042 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.329049 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329055 | orchestrator |
2026-03-11 00:27:21.329062 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-11 00:27:21.329077 | orchestrator | Wednesday 11 March 2026 00:27:14 +0000 (0:00:01.057) 0:00:17.457 *******
2026-03-11 00:27:21.329101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:27:21.329109 | orchestrator |
2026-03-11 00:27:21.329115 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-11 00:27:21.329122 | orchestrator | Wednesday 11 March 2026 00:27:15 +0000 (0:00:00.293) 0:00:17.751 *******
2026-03-11 00:27:21.329128 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:21.329135 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:27:21.329141 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:27:21.329147 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:27:21.329153 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:27:21.329159 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:27:21.329166 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:27:21.329172 | orchestrator |
2026-03-11 00:27:21.329178 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-11 00:27:21.329184 | orchestrator | Wednesday 11 March 2026 00:27:16 +0000 (0:00:01.535) 0:00:19.287 *******
2026-03-11 00:27:21.329190 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329196 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329202 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329205 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329209 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.329213 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.329217 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.329220 | orchestrator |
2026-03-11 00:27:21.329224 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-11 00:27:21.329228 | orchestrator | Wednesday 11 March 2026 00:27:16 +0000 (0:00:00.214) 0:00:19.501 *******
2026-03-11 00:27:21.329232 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329235 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329239 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329242 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329246 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.329250 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.329253 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.329257 | orchestrator |
2026-03-11 00:27:21.329261 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-11 00:27:21.329264 | orchestrator | Wednesday 11 March 2026 00:27:17 +0000 (0:00:00.202) 0:00:19.703 *******
2026-03-11 00:27:21.329268 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329272 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329275 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329279 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329282 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.329286 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.329290 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.329293 | orchestrator |
2026-03-11 00:27:21.329297 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-11 00:27:21.329301 | orchestrator | Wednesday 11 March 2026 00:27:17 +0000 (0:00:00.191) 0:00:19.895 *******
2026-03-11 00:27:21.329306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:27:21.329311 | orchestrator |
2026-03-11 00:27:21.329315 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-11 00:27:21.329319 | orchestrator | Wednesday 11 March 2026 00:27:17 +0000 (0:00:00.258) 0:00:20.154 *******
2026-03-11 00:27:21.329322 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329326 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329334 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329337 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.329341 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329345 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.329348 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.329352 | orchestrator |
2026-03-11 00:27:21.329356 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-11 00:27:21.329359 | orchestrator | Wednesday 11 March 2026 00:27:18 +0000 (0:00:00.655) 0:00:20.809 *******
2026-03-11 00:27:21.329364 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:27:21.329370 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:27:21.329376 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:27:21.329382 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:27:21.329388 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:27:21.329394 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:27:21.329400 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:27:21.329406 | orchestrator |
2026-03-11 00:27:21.329413 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-11 00:27:21.329419 | orchestrator | Wednesday 11 March 2026 00:27:18 +0000 (0:00:00.210) 0:00:21.020 *******
2026-03-11 00:27:21.329425 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329431 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329437 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329443 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:27:21.329449 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329454 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:27:21.329459 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:27:21.329465 | orchestrator |
2026-03-11 00:27:21.329471 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-11 00:27:21.329477 | orchestrator | Wednesday 11 March 2026 00:27:19 +0000 (0:00:01.040) 0:00:22.060 *******
2026-03-11 00:27:21.329483 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329490 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329495 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329501 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329508 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:27:21.329515 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:27:21.329519 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:27:21.329523 | orchestrator |
2026-03-11 00:27:21.329529 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-11 00:27:21.329536 | orchestrator | Wednesday 11 March 2026 00:27:20 +0000 (0:00:00.654) 0:00:22.715 *******
2026-03-11 00:27:21.329542 | orchestrator | ok: [testbed-manager]
2026-03-11 00:27:21.329549 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:27:21.329555 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:27:21.329569 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:27:21.329579 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:28:02.447997 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:28:02.448075 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:28:02.448081 | orchestrator |
2026-03-11 00:28:02.448086 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-11 00:28:02.448092 | orchestrator | Wednesday 11 March 2026 00:27:21 +0000 (0:00:01.177) 0:00:23.892 *******
2026-03-11 00:28:02.448096 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:28:02.448101 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:28:02.448105 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:28:02.448109 | orchestrator | changed: [testbed-manager]
2026-03-11 00:28:02.448113 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:28:02.448117 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:28:02.448121 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:28:02.448125 | orchestrator |
2026-03-11 00:28:02.448129 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-11 00:28:02.448133 | orchestrator | Wednesday 11 March 2026 00:27:38 +0000 (0:00:16.934) 0:00:40.827 *******
2026-03-11 00:28:02.448137 | orchestrator | ok: [testbed-manager]
2026-03-11 00:28:02.448156 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:28:02.448160 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:28:02.448164 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:28:02.448168 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:28:02.448171 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:28:02.448175 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:28:02.448179 | orchestrator |
2026-03-11 00:28:02.448194 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-11 00:28:02.448198 | orchestrator | Wednesday 11 March 2026 00:27:38 +0000 (0:00:00.229) 0:00:41.056 *******
2026-03-11 00:28:02.448202 | orchestrator | ok: [testbed-manager]
2026-03-11 00:28:02.448206 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:28:02.448210 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:28:02.448213 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:28:02.448217 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:28:02.448221 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:28:02.448224 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:28:02.448228 | orchestrator |
2026-03-11 00:28:02.448232 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-11 00:28:02.448236 | orchestrator | Wednesday 11 March 2026 00:27:38 +0000 (0:00:00.195) 0:00:41.251 *******
2026-03-11 00:28:02.448239 | orchestrator | ok: [testbed-manager]
2026-03-11 00:28:02.448243 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:28:02.448247 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:28:02.448250 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:28:02.448254 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:28:02.448258 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:28:02.448261 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:28:02.448266 | orchestrator |
2026-03-11 00:28:02.448270 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-11 00:28:02.448273 | orchestrator | Wednesday 11 March 2026 00:27:38 +0000 (0:00:00.215) 0:00:41.467 ******* 2026-03-11
00:28:02.448280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:28:02.448286 | orchestrator | 2026-03-11 00:28:02.448290 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-11 00:28:02.448294 | orchestrator | Wednesday 11 March 2026 00:27:39 +0000 (0:00:00.270) 0:00:41.738 ******* 2026-03-11 00:28:02.448297 | orchestrator | ok: [testbed-manager] 2026-03-11 00:28:02.448301 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:28:02.448305 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:28:02.448309 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:28:02.448312 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:28:02.448316 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:28:02.448320 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:28:02.448323 | orchestrator | 2026-03-11 00:28:02.448327 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-11 00:28:02.448331 | orchestrator | Wednesday 11 March 2026 00:27:41 +0000 (0:00:01.973) 0:00:43.711 ******* 2026-03-11 00:28:02.448335 | orchestrator | changed: [testbed-manager] 2026-03-11 00:28:02.448338 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:28:02.448342 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:28:02.448346 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:28:02.448350 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:28:02.448353 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:28:02.448357 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:28:02.448361 | orchestrator | 2026-03-11 00:28:02.448365 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-11 00:28:02.448368 | 
orchestrator | Wednesday 11 March 2026 00:27:42 +0000 (0:00:01.148) 0:00:44.859 ******* 2026-03-11 00:28:02.448372 | orchestrator | ok: [testbed-manager] 2026-03-11 00:28:02.448376 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:28:02.448379 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:28:02.448383 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:28:02.448391 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:28:02.448394 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:28:02.448398 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:28:02.448402 | orchestrator | 2026-03-11 00:28:02.448406 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-11 00:28:02.448409 | orchestrator | Wednesday 11 March 2026 00:27:43 +0000 (0:00:00.869) 0:00:45.728 ******* 2026-03-11 00:28:02.448414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:28:02.448419 | orchestrator | 2026-03-11 00:28:02.448433 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-11 00:28:02.448438 | orchestrator | Wednesday 11 March 2026 00:27:43 +0000 (0:00:00.288) 0:00:46.017 ******* 2026-03-11 00:28:02.448441 | orchestrator | changed: [testbed-manager] 2026-03-11 00:28:02.448445 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:28:02.448449 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:28:02.448453 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:28:02.448456 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:28:02.448460 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:28:02.448464 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:28:02.448468 | orchestrator | 2026-03-11 00:28:02.448480 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-11 00:28:02.448484 | orchestrator | Wednesday 11 March 2026 00:27:44 +0000 (0:00:01.008) 0:00:47.026 ******* 2026-03-11 00:28:02.448488 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:28:02.448492 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:28:02.448496 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:28:02.448499 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:28:02.448503 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:28:02.448507 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:28:02.448510 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:28:02.448514 | orchestrator | 2026-03-11 00:28:02.448518 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-11 00:28:02.448522 | orchestrator | Wednesday 11 March 2026 00:27:44 +0000 (0:00:00.217) 0:00:47.244 ******* 2026-03-11 00:28:02.448526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:28:02.448530 | orchestrator | 2026-03-11 00:28:02.448534 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-11 00:28:02.448537 | orchestrator | Wednesday 11 March 2026 00:27:44 +0000 (0:00:00.302) 0:00:47.546 ******* 2026-03-11 00:28:02.448541 | orchestrator | ok: [testbed-manager] 2026-03-11 00:28:02.448545 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:28:02.448548 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:28:02.448552 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:28:02.448556 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:28:02.448559 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:28:02.448563 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:28:02.448568 | 
orchestrator | 2026-03-11 00:28:02.448572 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-11 00:28:02.448577 | orchestrator | Wednesday 11 March 2026 00:27:46 +0000 (0:00:01.921) 0:00:49.468 ******* 2026-03-11 00:28:02.448581 | orchestrator | changed: [testbed-manager] 2026-03-11 00:28:02.448585 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:28:02.448590 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:28:02.448594 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:28:02.448598 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:28:02.448602 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:28:02.448607 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:28:02.448611 | orchestrator | 2026-03-11 00:28:02.448620 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-11 00:28:02.448624 | orchestrator | Wednesday 11 March 2026 00:27:48 +0000 (0:00:01.293) 0:00:50.761 ******* 2026-03-11 00:28:02.448629 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:28:02.448633 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:28:02.448637 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:28:02.448641 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:28:02.448644 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:28:02.448648 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:28:02.448652 | orchestrator | changed: [testbed-manager] 2026-03-11 00:28:02.448656 | orchestrator | 2026-03-11 00:28:02.448659 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-11 00:28:02.448663 | orchestrator | Wednesday 11 March 2026 00:27:59 +0000 (0:00:11.290) 0:01:02.052 ******* 2026-03-11 00:28:02.448667 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:28:02.448671 | orchestrator | ok: [testbed-manager] 2026-03-11 00:28:02.448674 | orchestrator | ok: 
[testbed-node-2] 2026-03-11 00:28:02.448678 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:28:02.448682 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:28:02.448685 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:28:02.448689 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:28:02.448693 | orchestrator | 2026-03-11 00:28:02.448697 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-11 00:28:02.448700 | orchestrator | Wednesday 11 March 2026 00:28:00 +0000 (0:00:01.186) 0:01:03.238 ******* 2026-03-11 00:28:02.448704 | orchestrator | ok: [testbed-manager] 2026-03-11 00:28:02.448708 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:28:02.448712 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:28:02.448715 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:28:02.448719 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:28:02.448736 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:28:02.448740 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:28:02.448743 | orchestrator | 2026-03-11 00:28:02.448747 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-11 00:28:02.448751 | orchestrator | Wednesday 11 March 2026 00:28:01 +0000 (0:00:00.934) 0:01:04.172 ******* 2026-03-11 00:28:02.448755 | orchestrator | ok: [testbed-manager] 2026-03-11 00:28:02.448758 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:28:02.448762 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:28:02.448766 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:28:02.448769 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:28:02.448773 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:28:02.448777 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:28:02.448780 | orchestrator | 2026-03-11 00:28:02.448784 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-11 00:28:02.448788 | orchestrator | Wednesday 
11 March 2026 00:28:01 +0000 (0:00:00.275) 0:01:04.448 ******* 2026-03-11 00:28:02.448792 | orchestrator | ok: [testbed-manager] 2026-03-11 00:28:02.448795 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:28:02.448799 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:28:02.448803 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:28:02.448815 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:28:02.448818 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:28:02.448822 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:28:02.448835 | orchestrator | 2026-03-11 00:28:02.448842 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-11 00:28:02.448854 | orchestrator | Wednesday 11 March 2026 00:28:02 +0000 (0:00:00.272) 0:01:04.720 ******* 2026-03-11 00:28:02.448858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:28:02.448862 | orchestrator | 2026-03-11 00:28:02.448868 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-11 00:30:23.236736 | orchestrator | Wednesday 11 March 2026 00:28:02 +0000 (0:00:00.295) 0:01:05.016 ******* 2026-03-11 00:30:23.236824 | orchestrator | ok: [testbed-manager] 2026-03-11 00:30:23.236834 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:30:23.236840 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:30:23.236846 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:30:23.236852 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:30:23.236867 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:30:23.236873 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:30:23.236886 | orchestrator | 2026-03-11 00:30:23.236893 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-11 00:30:23.236900 | orchestrator | Wednesday 11 March 2026 00:28:04 +0000 (0:00:01.849) 0:01:06.865 ******* 2026-03-11 00:30:23.236906 | orchestrator | changed: [testbed-manager] 2026-03-11 00:30:23.236913 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:30:23.236918 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:30:23.236924 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:30:23.236930 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:30:23.236935 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:30:23.236941 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:30:23.236946 | orchestrator | 2026-03-11 00:30:23.236952 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-11 00:30:23.236959 | orchestrator | Wednesday 11 March 2026 00:28:04 +0000 (0:00:00.658) 0:01:07.524 ******* 2026-03-11 00:30:23.236964 | orchestrator | ok: [testbed-manager] 2026-03-11 00:30:23.236970 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:30:23.236975 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:30:23.236981 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:30:23.236986 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:30:23.236992 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:30:23.236997 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:30:23.237003 | orchestrator | 2026-03-11 00:30:23.237008 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-11 00:30:23.237015 | orchestrator | Wednesday 11 March 2026 00:28:05 +0000 (0:00:00.242) 0:01:07.766 ******* 2026-03-11 00:30:23.237020 | orchestrator | ok: [testbed-manager] 2026-03-11 00:30:23.237026 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:30:23.237031 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:30:23.237036 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:30:23.237042 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:30:23.237047 | 
orchestrator | ok: [testbed-node-5] 2026-03-11 00:30:23.237053 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:30:23.237058 | orchestrator | 2026-03-11 00:30:23.237064 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-11 00:30:23.237070 | orchestrator | Wednesday 11 March 2026 00:28:06 +0000 (0:00:01.381) 0:01:09.148 ******* 2026-03-11 00:30:23.237075 | orchestrator | changed: [testbed-manager] 2026-03-11 00:30:23.237081 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:30:23.237086 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:30:23.237091 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:30:23.237097 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:30:23.237103 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:30:23.237108 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:30:23.237114 | orchestrator | 2026-03-11 00:30:23.237119 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-11 00:30:23.237127 | orchestrator | Wednesday 11 March 2026 00:28:08 +0000 (0:00:02.009) 0:01:11.157 ******* 2026-03-11 00:30:23.237133 | orchestrator | ok: [testbed-manager] 2026-03-11 00:30:23.237139 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:30:23.237144 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:30:23.237149 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:30:23.237155 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:30:23.237160 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:30:23.237166 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:30:23.237171 | orchestrator | 2026-03-11 00:30:23.237177 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-11 00:30:23.237203 | orchestrator | Wednesday 11 March 2026 00:28:11 +0000 (0:00:02.991) 0:01:14.149 ******* 2026-03-11 00:30:23.237209 | orchestrator | ok: [testbed-manager] 2026-03-11 00:30:23.237215 
| orchestrator | ok: [testbed-node-5] 2026-03-11 00:30:23.237223 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:30:23.237235 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:30:23.237249 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:30:23.237257 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:30:23.237266 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:30:23.237274 | orchestrator | 2026-03-11 00:30:23.237283 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-11 00:30:23.237292 | orchestrator | Wednesday 11 March 2026 00:28:46 +0000 (0:00:35.364) 0:01:49.514 ******* 2026-03-11 00:30:23.237302 | orchestrator | changed: [testbed-manager] 2026-03-11 00:30:23.237311 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:30:23.237320 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:30:23.237329 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:30:23.237338 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:30:23.237347 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:30:23.237357 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:30:23.237365 | orchestrator | 2026-03-11 00:30:23.237375 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-11 00:30:23.237385 | orchestrator | Wednesday 11 March 2026 00:30:07 +0000 (0:01:21.049) 0:03:10.564 ******* 2026-03-11 00:30:23.237395 | orchestrator | ok: [testbed-manager] 2026-03-11 00:30:23.237406 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:30:23.237416 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:30:23.237425 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:30:23.237436 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:30:23.237446 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:30:23.237456 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:30:23.237463 | orchestrator | 2026-03-11 00:30:23.237469 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-11 00:30:23.237476 | orchestrator | Wednesday 11 March 2026 00:30:10 +0000 (0:00:02.125) 0:03:12.689 ******* 2026-03-11 00:30:23.237481 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:30:23.237486 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:30:23.237492 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:30:23.237497 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:30:23.237503 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:30:23.237508 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:30:23.237513 | orchestrator | changed: [testbed-manager] 2026-03-11 00:30:23.237519 | orchestrator | 2026-03-11 00:30:23.237524 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-11 00:30:23.237530 | orchestrator | Wednesday 11 March 2026 00:30:22 +0000 (0:00:11.904) 0:03:24.593 ******* 2026-03-11 00:30:23.237561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-11 00:30:23.237641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-11 00:30:23.237654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-11 00:30:23.237676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-11 00:30:23.237688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-11 00:30:23.237700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-11 00:30:23.237708 | orchestrator | 2026-03-11 00:30:23.237717 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-11 00:30:23.237726 | orchestrator | Wednesday 11 March 2026 00:30:22 +0000 (0:00:00.403) 0:03:24.997 ******* 2026-03-11 00:30:23.237735 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-11 00:30:23.237744 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-11 00:30:23.237753 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:30:23.237761 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-11 00:30:23.237770 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:30:23.237778 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-11 00:30:23.237787 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:30:23.237796 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:30:23.237805 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:30:23.237814 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:30:23.237822 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:30:23.237831 | orchestrator | 2026-03-11 00:30:23.237840 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-11 00:30:23.237850 | orchestrator | Wednesday 11 March 2026 00:30:23 +0000 (0:00:00.728) 0:03:25.725 ******* 2026-03-11 00:30:23.237858 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-11 00:30:23.237875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-11 00:30:23.237885 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-11 00:30:23.237894 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-11 00:30:23.237903 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-11 00:30:23.237922 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-11 00:30:30.467821 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-11 00:30:30.467940 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-11 00:30:30.467955 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-11 00:30:30.467993 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-11 00:30:30.468006 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-11 00:30:30.468017 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:30:30.468028 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-11 00:30:30.468039 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-11 00:30:30.468048 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-11 00:30:30.468058 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-11 00:30:30.468068 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-11 00:30:30.468078 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-11 00:30:30.468088 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-11 00:30:30.468098 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-11 00:30:30.468109 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-11 00:30:30.468118 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-11 00:30:30.468128 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-11 00:30:30.468137 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-11 00:30:30.468146 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-11 00:30:30.468155 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-11 00:30:30.468165 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-11 00:30:30.468175 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-11 00:30:30.468185 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-11 00:30:30.468196 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:30:30.468206 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-11 00:30:30.468217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-11 00:30:30.468227 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-11 00:30:30.468238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-11 00:30:30.468247 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-11 00:30:30.468258 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-11 00:30:30.468268 
| orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-11 00:30:30.468278 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-11 00:30:30.468289 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-11 00:30:30.468299 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-11 00:30:30.468309 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-11 00:30:30.468329 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-11 00:30:30.468340 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:30:30.468350 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:30:30.468375 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-11 00:30:30.468386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-11 00:30:30.468396 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-11 00:30:30.468406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-11 00:30:30.468417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-11 00:30:30.468446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-11 00:30:30.468457 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-11 00:30:30.468467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-11 00:30:30.468478 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-11 00:30:30.468488 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-11 00:30:30.468498 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-11 00:30:30.468508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-11 00:30:30.468519 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-11 00:30:30.468529 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-11 00:30:30.468539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-11 00:30:30.468602 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-11 00:30:30.468615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-11 00:30:30.468626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-11 00:30:30.468636 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-11 00:30:30.468646 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-11 00:30:30.468656 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-11 00:30:30.468667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-11 00:30:30.468677 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-11 00:30:30.468686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-11 00:30:30.468696 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-11 00:30:30.468705 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-11 00:30:30.468714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-11 00:30:30.468723 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-11 00:30:30.468732 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-11 00:30:30.468742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-11 00:30:30.468752 | orchestrator |
2026-03-11 00:30:30.468762 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-11 00:30:30.468780 | orchestrator | Wednesday 11 March 2026 00:30:29 +0000 (0:00:06.071) 0:03:31.797 *******
2026-03-11 00:30:30.468790 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-11 00:30:30.468800 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-11 00:30:30.468809 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-11 00:30:30.468819 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-11 00:30:30.468829 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-11 00:30:30.468838 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-11 00:30:30.468848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-11 00:30:30.468859 | orchestrator |
2026-03-11 00:30:30.468869 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-11 00:30:30.468879 | orchestrator | Wednesday 11 March 2026 00:30:29 +0000 (0:00:00.592) 0:03:32.389 *******
2026-03-11 00:30:30.468889 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:30.468899 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:30:30.468910 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:30.468919 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:30:30.468936 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:30.468946 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:30:30.468955 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:30.468966 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:30:30.468978 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:30.468988 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:30.469008 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524171 | orchestrator |
2026-03-11 00:30:44.524267 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-11 00:30:44.524284 | orchestrator | Wednesday 11 March 2026 00:30:30 +0000 (0:00:00.643) 0:03:33.033 *******
2026-03-11 00:30:44.524298 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524313 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524326 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:30:44.524340 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524351 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:30:44.524364 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524376 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:30:44.524389 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:30:44.524403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524416 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524430 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-11 00:30:44.524443 | orchestrator |
2026-03-11 00:30:44.524457 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-11 00:30:44.524499 | orchestrator | Wednesday 11 March 2026 00:30:32 +0000 (0:00:01.642) 0:03:34.676 *******
2026-03-11 00:30:44.524513 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-11 00:30:44.524556 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:30:44.524569 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-11 00:30:44.524582 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:30:44.524595 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-11 00:30:44.524607 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:30:44.524620 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-11 00:30:44.524632 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:30:44.524640 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-11 00:30:44.524647 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-11 00:30:44.524654 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-11 00:30:44.524661 | orchestrator |
2026-03-11 00:30:44.524669 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-11 00:30:44.524676 | orchestrator | Wednesday 11 March 2026 00:30:32 +0000 (0:00:00.574) 0:03:35.250 *******
2026-03-11 00:30:44.524683 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:30:44.524691 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:30:44.524698 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:30:44.524705 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:30:44.524712 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:30:44.524720 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:30:44.524729 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:30:44.524737 | orchestrator |
2026-03-11 00:30:44.524746 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-11 00:30:44.524754 | orchestrator | Wednesday 11 March 2026 00:30:32 +0000 (0:00:00.316) 0:03:35.567 *******
2026-03-11 00:30:44.524762 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:30:44.524772 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:30:44.524780 | orchestrator | ok: [testbed-manager]
2026-03-11 00:30:44.524788 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:30:44.524796 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:30:44.524804 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:30:44.524812 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:30:44.524821 | orchestrator |
2026-03-11 00:30:44.524829 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-11 00:30:44.524837 | orchestrator | Wednesday 11 March 2026 00:30:38 +0000 (0:00:05.327) 0:03:40.894 *******
2026-03-11 00:30:44.524846 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-11 00:30:44.524855 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-11 00:30:44.524863 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:30:44.524871 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-11 00:30:44.524879 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:30:44.524887 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-11 00:30:44.524895 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:30:44.524904 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-11 00:30:44.524912 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:30:44.524939 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-11 00:30:44.524947 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:30:44.524955 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:30:44.524962 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-11 00:30:44.524970 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:30:44.524977 | orchestrator |
2026-03-11 00:30:44.524984 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-11 00:30:44.524999 | orchestrator | Wednesday 11 March 2026 00:30:38 +0000 (0:00:00.291) 0:03:41.185 *******
2026-03-11 00:30:44.525007 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-11 00:30:44.525014 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-11 00:30:44.525022 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-11 00:30:44.525046 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-11 00:30:44.525053 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-11 00:30:44.525061 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-11 00:30:44.525068 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-11 00:30:44.525075 | orchestrator |
2026-03-11 00:30:44.525082 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-11 00:30:44.525089 | orchestrator | Wednesday 11 March 2026 00:30:39 +0000 (0:00:01.205) 0:03:42.391 *******
2026-03-11 00:30:44.525099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:30:44.525108 | orchestrator |
2026-03-11 00:30:44.525116 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-11 00:30:44.525123 | orchestrator | Wednesday 11 March 2026 00:30:40 +0000 (0:00:00.410) 0:03:42.802 *******
2026-03-11 00:30:44.525130 | orchestrator | ok: [testbed-manager]
2026-03-11 00:30:44.525137 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:30:44.525144 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:30:44.525152 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:30:44.525159 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:30:44.525166 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:30:44.525173 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:30:44.525180 | orchestrator |
2026-03-11 00:30:44.525188 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-11 00:30:44.525195 | orchestrator | Wednesday 11 March 2026 00:30:41 +0000 (0:00:01.271) 0:03:44.073 *******
2026-03-11 00:30:44.525207 | orchestrator | ok: [testbed-manager]
2026-03-11 00:30:44.525219 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:30:44.525231 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:30:44.525243 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:30:44.525255 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:30:44.525266 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:30:44.525277 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:30:44.525289 | orchestrator |
2026-03-11 00:30:44.525302 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-11 00:30:44.525314 | orchestrator | Wednesday 11 March 2026 00:30:42 +0000 (0:00:00.617) 0:03:44.690 *******
2026-03-11 00:30:44.525327 | orchestrator | changed: [testbed-manager]
2026-03-11 00:30:44.525340 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:30:44.525352 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:30:44.525364 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:30:44.525372 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:30:44.525379 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:30:44.525386 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:30:44.525394 | orchestrator |
2026-03-11 00:30:44.525401 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-11 00:30:44.525408 | orchestrator | Wednesday 11 March 2026 00:30:42 +0000 (0:00:00.671) 0:03:45.361 *******
2026-03-11 00:30:44.525415 | orchestrator | ok: [testbed-manager]
2026-03-11 00:30:44.525423 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:30:44.525431 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:30:44.525444 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:30:44.525455 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:30:44.525467 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:30:44.525479 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:30:44.525491 | orchestrator |
2026-03-11 00:30:44.525503 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-11 00:30:44.525597 | orchestrator | Wednesday 11 March 2026 00:30:43 +0000 (0:00:00.620) 0:03:45.982 *******
2026-03-11 00:30:44.525617 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187516.1257133, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:44.525634 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187523.4147193, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:44.525656 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187542.0631363, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:44.525682 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187528.9419873, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422233 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187518.0904021, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422346 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187522.2301948, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422358 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187538.7867265, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422386 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422394 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422412 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422420 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422447 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422455 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422462 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 00:30:49.422475 | orchestrator |
2026-03-11 00:30:49.422484 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-11 00:30:49.422504 | orchestrator | Wednesday 11 March 2026 00:30:44 +0000 (0:00:01.105) 0:03:47.088 *******
2026-03-11 00:30:49.422554 | orchestrator | changed: [testbed-manager]
2026-03-11 00:30:49.422563 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:30:49.422569 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:30:49.422576 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:30:49.422583 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:30:49.422590 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:30:49.422597 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:30:49.422604 | orchestrator |
2026-03-11 00:30:49.422611 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-11 00:30:49.422617 | orchestrator | Wednesday 11 March 2026 00:30:45 +0000 (0:00:01.149) 0:03:48.237 *******
2026-03-11 00:30:49.422624 | orchestrator | changed: [testbed-manager]
2026-03-11 00:30:49.422631 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:30:49.422637 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:30:49.422644 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:30:49.422651 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:30:49.422657 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:30:49.422664 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:30:49.422671 | orchestrator |
2026-03-11 00:30:49.422678 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-11 00:30:49.422684 | orchestrator | Wednesday 11 March 2026 00:30:46 +0000 (0:00:01.176) 0:03:49.413 *******
2026-03-11 00:30:49.422691 | orchestrator | changed: [testbed-manager]
2026-03-11 00:30:49.422698 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:30:49.422705 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:30:49.422711 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:30:49.422718 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:30:49.422725 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:30:49.422731 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:30:49.422738 | orchestrator |
2026-03-11 00:30:49.422745 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-11 00:30:49.422752 | orchestrator | Wednesday 11 March 2026 00:30:47 +0000 (0:00:01.119) 0:03:50.532 *******
2026-03-11 00:30:49.422760 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:30:49.422767 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:30:49.422774 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:30:49.422787 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:30:49.422795 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:30:49.422803 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:30:49.422815 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:30:49.422826 | orchestrator |
2026-03-11 00:30:49.422837 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-11 00:30:49.422848 | orchestrator | Wednesday 11 March 2026 00:30:48 +0000 (0:00:00.262) 0:03:50.794 *******
2026-03-11 00:30:49.422859 | orchestrator | ok: [testbed-manager]
2026-03-11 00:30:49.422872 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:30:49.422883 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:30:49.422896 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:30:49.422919 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:30:49.422927 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:30:49.422942 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:30:49.422948 | orchestrator |
2026-03-11 00:30:49.422955 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-11 00:30:49.422962 | orchestrator | Wednesday 11 March 2026 00:30:48 +0000 (0:00:00.764) 0:03:51.559 *******
2026-03-11 00:30:49.422971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:30:49.422986 | orchestrator |
2026-03-11 00:30:49.422993 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-11 00:30:49.423006 | orchestrator | Wednesday 11 March 2026 00:30:49 +0000 (0:00:00.428) 0:03:51.987 *******
2026-03-11 00:32:08.996952 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:08.997073 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:08.997088 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:08.997098 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:08.997107 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:08.997117 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:08.997126 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:08.997136 | orchestrator |
2026-03-11 00:32:08.997146 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-11 00:32:08.997157 | orchestrator | Wednesday 11 March 2026 00:30:57 +0000 (0:00:08.429) 0:04:00.416 *******
2026-03-11 00:32:08.997166 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:08.997175 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:08.997184 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:08.997193 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:08.997202 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:08.997223 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:08.997241 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:08.997250 | orchestrator |
2026-03-11 00:32:08.997259 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-11 00:32:08.997268 | orchestrator | Wednesday 11 March 2026 00:30:59 +0000 (0:00:01.269) 0:04:01.685 *******
2026-03-11 00:32:08.997277 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:08.997286 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:08.997295 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:08.997303 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:08.997312 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:08.997338 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:08.997347 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:08.997356 | orchestrator |
2026-03-11 00:32:08.997365 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-11 00:32:08.997374 | orchestrator | Wednesday 11 March 2026 00:31:00 +0000 (0:00:01.338) 0:04:03.024 *******
2026-03-11 00:32:08.997382 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:08.997391 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:08.997400 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:08.997409 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:08.997417 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:08.997426 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:08.997435 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:08.997444 | orchestrator |
2026-03-11 00:32:08.997453 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-11 00:32:08.997463 | orchestrator | Wednesday 11 March 2026 00:31:00 +0000 (0:00:00.296) 0:04:03.321 *******
2026-03-11 00:32:08.997472 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:08.997480 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:08.997489 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:08.997500 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:08.997510 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:08.997521 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:08.997531 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:08.997541 | orchestrator |
2026-03-11 00:32:08.997551 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-11 00:32:08.997562 | orchestrator | Wednesday 11 March 2026 00:31:01 +0000 (0:00:00.322) 0:04:03.643 *******
2026-03-11 00:32:08.997572 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:08.997582 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:08.997593 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:08.997603 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:08.997639 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:08.997649 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:08.997659 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:08.997669 | orchestrator |
2026-03-11 00:32:08.997680 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-11 00:32:08.997690 | orchestrator | Wednesday 11 March 2026 00:31:01 +0000 (0:00:00.280) 0:04:03.923 *******
2026-03-11 00:32:08.997700 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:08.997710 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:08.997721 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:08.997730 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:08.997740 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:08.997750 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:08.997760 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:08.997770 | orchestrator |
2026-03-11 00:32:08.997780 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-11 00:32:08.997790 | orchestrator | Wednesday 11 March 2026 00:31:07 +0000 (0:00:05.772) 0:04:09.696 *******
2026-03-11 00:32:08.997804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:32:08.997816 | orchestrator |
2026-03-11 00:32:08.997825 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-11 00:32:08.997834 | orchestrator | Wednesday 11 March 2026 00:31:07 +0000 (0:00:00.379) 0:04:10.075 *******
2026-03-11 00:32:08.997843 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-11 00:32:08.997851 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-11 00:32:08.997861 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-11 00:32:08.997870 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-11 00:32:08.997878 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:08.997905 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:08.997914 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-11 00:32:08.997923 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-11 00:32:08.997932 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-11 00:32:08.997940 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-11 00:32:08.997949 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:08.997958 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-11 00:32:08.997967 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-11 00:32:08.997976 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:08.997985 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-11 00:32:08.997993 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-11 00:32:08.998068 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:08.998080 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:08.998089 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-11 00:32:08.998098 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-11 00:32:08.998107 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:08.998115 | orchestrator |
2026-03-11 00:32:08.998124 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-11 00:32:08.998133 | orchestrator | Wednesday 11 March 2026 00:31:07 +0000 (0:00:00.345) 0:04:10.420 *******
2026-03-11 00:32:08.998142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:32:08.998151 | orchestrator |
2026-03-11 00:32:08.998159 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-11 00:32:08.998168 | orchestrator | Wednesday 11 March 2026 00:31:08 +0000 (0:00:00.372) 0:04:10.793 *******
2026-03-11 00:32:08.998185 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-11 00:32:08.998194 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:08.998203 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-11 00:32:08.998212 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-11 00:32:08.998220 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:08.998229 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-11 00:32:08.998238 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:08.998246 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-11 00:32:08.998255 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:08.998263 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-11 00:32:08.998272 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:08.998281 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:08.998289 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-11 00:32:08.998298 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:08.998306 | orchestrator |
2026-03-11 00:32:08.998315 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-11 00:32:08.998338 | orchestrator | Wednesday 11 March 2026 00:31:08 +0000 (0:00:00.330) 0:04:11.124 *******
2026-03-11 00:32:08.998348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:32:08.998357 | orchestrator |
2026-03-11 00:32:08.998365 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-11 00:32:08.998374 | orchestrator | Wednesday 11 March 2026 00:31:09 +0000 (0:00:00.457) 0:04:11.582 *******
2026-03-11 00:32:08.998383 | orchestrator | changed: [testbed-manager]
2026-03-11 00:32:08.998392 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:32:08.998400 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:32:08.998409 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:32:08.998418 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:32:08.998427 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:32:08.998435 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:32:08.998444 | orchestrator | 2026-03-11 00:32:08.998452 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-11 00:32:08.998461 | orchestrator | Wednesday 11 March 2026 00:31:42 +0000 (0:00:33.277) 0:04:44.859 ******* 2026-03-11 00:32:08.998479 | orchestrator | changed: [testbed-manager] 2026-03-11 00:32:08.998488 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:32:08.998497 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:32:08.998505 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:32:08.998514 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:32:08.998523 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:32:08.998531 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:32:08.998540 | orchestrator | 2026-03-11 00:32:08.998549 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-11 00:32:08.998557 | orchestrator | Wednesday 11 March 2026 00:31:50 +0000 (0:00:08.541) 0:04:53.401 ******* 2026-03-11 00:32:08.998571 | orchestrator | changed: [testbed-manager] 2026-03-11 00:32:08.998580 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:32:08.998589 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:32:08.998597 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:32:08.998606 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:32:08.998614 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:32:08.998623 | orchestrator | changed: [testbed-node-5] 2026-03-11 
00:32:08.998632 | orchestrator | 2026-03-11 00:32:08.998640 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-11 00:32:08.998649 | orchestrator | Wednesday 11 March 2026 00:31:59 +0000 (0:00:08.898) 0:05:02.300 ******* 2026-03-11 00:32:08.998664 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:08.998673 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:08.998681 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:08.998690 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:08.998699 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:08.998707 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:08.998716 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:08.998725 | orchestrator | 2026-03-11 00:32:08.998734 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-11 00:32:08.998742 | orchestrator | Wednesday 11 March 2026 00:32:01 +0000 (0:00:02.095) 0:05:04.395 ******* 2026-03-11 00:32:08.998751 | orchestrator | changed: [testbed-manager] 2026-03-11 00:32:08.998760 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:32:08.998768 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:32:08.998777 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:32:08.998786 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:32:08.998794 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:32:08.998803 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:32:08.998812 | orchestrator | 2026-03-11 00:32:08.998827 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-11 00:32:20.303902 | orchestrator | Wednesday 11 March 2026 00:32:08 +0000 (0:00:07.155) 0:05:11.550 ******* 2026-03-11 00:32:20.304007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:32:20.304020 | orchestrator | 2026-03-11 00:32:20.304042 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-11 00:32:20.304051 | orchestrator | Wednesday 11 March 2026 00:32:09 +0000 (0:00:00.402) 0:05:11.953 ******* 2026-03-11 00:32:20.304060 | orchestrator | changed: [testbed-manager] 2026-03-11 00:32:20.304069 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:32:20.304078 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:32:20.304086 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:32:20.304094 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:32:20.304102 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:32:20.304110 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:32:20.304119 | orchestrator | 2026-03-11 00:32:20.304127 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-11 00:32:20.304135 | orchestrator | Wednesday 11 March 2026 00:32:10 +0000 (0:00:00.803) 0:05:12.756 ******* 2026-03-11 00:32:20.304144 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:20.304154 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:20.304162 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:20.304170 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:20.304178 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:20.304186 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:20.304194 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:20.304201 | orchestrator | 2026-03-11 00:32:20.304208 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-11 00:32:20.304216 | orchestrator | Wednesday 11 March 2026 00:32:12 +0000 (0:00:01.831) 0:05:14.587 ******* 2026-03-11 00:32:20.304223 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:32:20.304231 | orchestrator | 
changed: [testbed-node-5] 2026-03-11 00:32:20.304238 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:32:20.304245 | orchestrator | changed: [testbed-manager] 2026-03-11 00:32:20.304253 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:32:20.304261 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:32:20.304269 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:32:20.304278 | orchestrator | 2026-03-11 00:32:20.304286 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-11 00:32:20.304294 | orchestrator | Wednesday 11 March 2026 00:32:12 +0000 (0:00:00.777) 0:05:15.364 ******* 2026-03-11 00:32:20.304389 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:32:20.304399 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:32:20.304407 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:32:20.304415 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:32:20.304423 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:32:20.304431 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:32:20.304439 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:32:20.304448 | orchestrator | 2026-03-11 00:32:20.304456 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-11 00:32:20.304465 | orchestrator | Wednesday 11 March 2026 00:32:13 +0000 (0:00:00.253) 0:05:15.618 ******* 2026-03-11 00:32:20.304474 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:32:20.304482 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:32:20.304491 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:32:20.304499 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:32:20.304508 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:32:20.304516 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:32:20.304525 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:32:20.304533 | orchestrator | 2026-03-11 
00:32:20.304541 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-11 00:32:20.304549 | orchestrator | Wednesday 11 March 2026 00:32:13 +0000 (0:00:00.336) 0:05:15.955 ******* 2026-03-11 00:32:20.304557 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:20.304564 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:20.304572 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:20.304580 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:20.304587 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:20.304595 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:20.304602 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:20.304610 | orchestrator | 2026-03-11 00:32:20.304618 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-11 00:32:20.304639 | orchestrator | Wednesday 11 March 2026 00:32:13 +0000 (0:00:00.253) 0:05:16.208 ******* 2026-03-11 00:32:20.304647 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:32:20.304655 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:32:20.304662 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:32:20.304670 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:32:20.304677 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:32:20.304685 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:32:20.304692 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:32:20.304700 | orchestrator | 2026-03-11 00:32:20.304708 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-11 00:32:20.304716 | orchestrator | Wednesday 11 March 2026 00:32:13 +0000 (0:00:00.253) 0:05:16.462 ******* 2026-03-11 00:32:20.304724 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:20.304731 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:20.304739 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:20.304747 | orchestrator | 
ok: [testbed-node-5] 2026-03-11 00:32:20.304754 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:20.304762 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:20.304769 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:20.304777 | orchestrator | 2026-03-11 00:32:20.304785 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-11 00:32:20.304793 | orchestrator | Wednesday 11 March 2026 00:32:14 +0000 (0:00:00.301) 0:05:16.764 ******* 2026-03-11 00:32:20.304800 | orchestrator | ok: [testbed-manager] =>  2026-03-11 00:32:20.304808 | orchestrator |  docker_version: 5:27.5.1 2026-03-11 00:32:20.304815 | orchestrator | ok: [testbed-node-3] =>  2026-03-11 00:32:20.304822 | orchestrator |  docker_version: 5:27.5.1 2026-03-11 00:32:20.304829 | orchestrator | ok: [testbed-node-4] =>  2026-03-11 00:32:20.304837 | orchestrator |  docker_version: 5:27.5.1 2026-03-11 00:32:20.304844 | orchestrator | ok: [testbed-node-5] =>  2026-03-11 00:32:20.304851 | orchestrator |  docker_version: 5:27.5.1 2026-03-11 00:32:20.304872 | orchestrator | ok: [testbed-node-0] =>  2026-03-11 00:32:20.304886 | orchestrator |  docker_version: 5:27.5.1 2026-03-11 00:32:20.304893 | orchestrator | ok: [testbed-node-1] =>  2026-03-11 00:32:20.304900 | orchestrator |  docker_version: 5:27.5.1 2026-03-11 00:32:20.304907 | orchestrator | ok: [testbed-node-2] =>  2026-03-11 00:32:20.304914 | orchestrator |  docker_version: 5:27.5.1 2026-03-11 00:32:20.304922 | orchestrator | 2026-03-11 00:32:20.304929 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-11 00:32:20.304936 | orchestrator | Wednesday 11 March 2026 00:32:14 +0000 (0:00:00.257) 0:05:17.022 ******* 2026-03-11 00:32:20.304943 | orchestrator | ok: [testbed-manager] =>  2026-03-11 00:32:20.304950 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-11 00:32:20.304958 | orchestrator | ok: [testbed-node-3] =>  2026-03-11 
00:32:20.304965 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-11 00:32:20.304972 | orchestrator | ok: [testbed-node-4] =>  2026-03-11 00:32:20.304979 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-11 00:32:20.304986 | orchestrator | ok: [testbed-node-5] =>  2026-03-11 00:32:20.304993 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-11 00:32:20.305000 | orchestrator | ok: [testbed-node-0] =>  2026-03-11 00:32:20.305007 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-11 00:32:20.305015 | orchestrator | ok: [testbed-node-1] =>  2026-03-11 00:32:20.305022 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-11 00:32:20.305029 | orchestrator | ok: [testbed-node-2] =>  2026-03-11 00:32:20.305036 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-11 00:32:20.305043 | orchestrator | 2026-03-11 00:32:20.305050 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-11 00:32:20.305058 | orchestrator | Wednesday 11 March 2026 00:32:14 +0000 (0:00:00.271) 0:05:17.293 ******* 2026-03-11 00:32:20.305065 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:32:20.305072 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:32:20.305079 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:32:20.305086 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:32:20.305094 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:32:20.305101 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:32:20.305108 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:32:20.305115 | orchestrator | 2026-03-11 00:32:20.305122 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-11 00:32:20.305129 | orchestrator | Wednesday 11 March 2026 00:32:14 +0000 (0:00:00.255) 0:05:17.549 ******* 2026-03-11 00:32:20.305137 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:32:20.305144 | orchestrator | skipping: [testbed-node-3] 
2026-03-11 00:32:20.305151 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:32:20.305158 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:32:20.305165 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:32:20.305172 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:32:20.305180 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:32:20.305187 | orchestrator | 2026-03-11 00:32:20.305194 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-11 00:32:20.305201 | orchestrator | Wednesday 11 March 2026 00:32:15 +0000 (0:00:00.250) 0:05:17.799 ******* 2026-03-11 00:32:20.305211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:32:20.305219 | orchestrator | 2026-03-11 00:32:20.305227 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-11 00:32:20.305234 | orchestrator | Wednesday 11 March 2026 00:32:15 +0000 (0:00:00.389) 0:05:18.189 ******* 2026-03-11 00:32:20.305241 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:20.305248 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:20.305255 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:20.305262 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:20.305269 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:20.305277 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:20.305289 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:20.305307 | orchestrator | 2026-03-11 00:32:20.305315 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-11 00:32:20.305322 | orchestrator | Wednesday 11 March 2026 00:32:16 +0000 (0:00:01.130) 0:05:19.319 ******* 2026-03-11 00:32:20.305330 | orchestrator 
| ok: [testbed-manager] 2026-03-11 00:32:20.305337 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:20.305344 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:20.305352 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:20.305359 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:20.305366 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:20.305377 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:20.305385 | orchestrator | 2026-03-11 00:32:20.305392 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-11 00:32:20.305400 | orchestrator | Wednesday 11 March 2026 00:32:19 +0000 (0:00:03.174) 0:05:22.494 ******* 2026-03-11 00:32:20.305407 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-11 00:32:20.305415 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-11 00:32:20.305422 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-11 00:32:20.305429 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-11 00:32:20.305436 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-11 00:32:20.305444 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-11 00:32:20.305451 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:32:20.305458 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-11 00:32:20.305465 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-11 00:32:20.305472 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-11 00:32:20.305479 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:32:20.305486 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-11 00:32:20.305493 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-11 00:32:20.305500 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2026-03-11 00:32:20.305507 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:32:20.305514 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-11 00:32:20.305526 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-11 00:33:22.634761 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-11 00:33:22.634878 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:33:22.634896 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-11 00:33:22.634908 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-11 00:33:22.634919 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-11 00:33:22.634930 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:33:22.634942 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:22.634952 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-11 00:33:22.634964 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-11 00:33:22.634974 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-11 00:33:22.634986 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:22.634997 | orchestrator | 2026-03-11 00:33:22.635010 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-11 00:33:22.635023 | orchestrator | Wednesday 11 March 2026 00:32:20 +0000 (0:00:00.580) 0:05:23.075 ******* 2026-03-11 00:33:22.635034 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:22.635045 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.635056 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:22.635067 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:22.635077 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:22.635089 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:22.635100 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:22.635138 | 
orchestrator | 2026-03-11 00:33:22.635203 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-11 00:33:22.635227 | orchestrator | Wednesday 11 March 2026 00:32:27 +0000 (0:00:06.972) 0:05:30.047 ******* 2026-03-11 00:33:22.635245 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:22.635260 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:22.635271 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.635281 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:22.635293 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:22.635306 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:22.635317 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:22.635330 | orchestrator | 2026-03-11 00:33:22.635341 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-11 00:33:22.635354 | orchestrator | Wednesday 11 March 2026 00:32:28 +0000 (0:00:01.360) 0:05:31.408 ******* 2026-03-11 00:33:22.635366 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:22.635394 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.635417 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:22.635430 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:22.635442 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:22.635455 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:22.635467 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:22.635479 | orchestrator | 2026-03-11 00:33:22.635492 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-11 00:33:22.635505 | orchestrator | Wednesday 11 March 2026 00:32:37 +0000 (0:00:08.364) 0:05:39.773 ******* 2026-03-11 00:33:22.635517 | orchestrator | changed: [testbed-manager] 2026-03-11 00:33:22.635529 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.635541 | orchestrator | changed: 
[testbed-node-3] 2026-03-11 00:33:22.635554 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:22.635566 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:22.635578 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:22.635590 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:22.635602 | orchestrator | 2026-03-11 00:33:22.635615 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-11 00:33:22.635628 | orchestrator | Wednesday 11 March 2026 00:32:40 +0000 (0:00:03.576) 0:05:43.350 ******* 2026-03-11 00:33:22.635638 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:22.635649 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:22.635660 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.635670 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:22.635681 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:22.635692 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:22.635702 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:22.635713 | orchestrator | 2026-03-11 00:33:22.635724 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-11 00:33:22.635734 | orchestrator | Wednesday 11 March 2026 00:32:42 +0000 (0:00:01.371) 0:05:44.721 ******* 2026-03-11 00:33:22.635745 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:22.635756 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:22.635767 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.635777 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:22.635788 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:22.635799 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:22.635809 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:22.635821 | orchestrator | 2026-03-11 00:33:22.635832 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-03-11 00:33:22.635843 | orchestrator | Wednesday 11 March 2026 00:32:43 +0000 (0:00:01.647) 0:05:46.368 ******* 2026-03-11 00:33:22.635853 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:33:22.635864 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:33:22.635875 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:33:22.635886 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:33:22.635896 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:22.635917 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:22.635928 | orchestrator | changed: [testbed-manager] 2026-03-11 00:33:22.635938 | orchestrator | 2026-03-11 00:33:22.635949 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-11 00:33:22.635960 | orchestrator | Wednesday 11 March 2026 00:32:44 +0000 (0:00:00.622) 0:05:46.991 ******* 2026-03-11 00:33:22.635971 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:22.635982 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:22.635992 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:22.636003 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.636014 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:22.636024 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:22.636035 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:22.636046 | orchestrator | 2026-03-11 00:33:22.636056 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-11 00:33:22.636085 | orchestrator | Wednesday 11 March 2026 00:32:54 +0000 (0:00:09.612) 0:05:56.603 ******* 2026-03-11 00:33:22.636097 | orchestrator | changed: [testbed-manager] 2026-03-11 00:33:22.636108 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:22.636118 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:22.636129 | orchestrator | changed: [testbed-node-5] 2026-03-11 
00:33:22.636140 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:22.636173 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:22.636186 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:22.636197 | orchestrator |
2026-03-11 00:33:22.636208 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-11 00:33:22.636219 | orchestrator | Wednesday 11 March 2026 00:32:54 +0000 (0:00:00.954) 0:05:57.558 *******
2026-03-11 00:33:22.636229 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:22.636240 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:22.636251 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:22.636262 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:22.636272 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:22.636283 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:22.636293 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:22.636304 | orchestrator |
2026-03-11 00:33:22.636315 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-11 00:33:22.636326 | orchestrator | Wednesday 11 March 2026 00:33:04 +0000 (0:00:09.374) 0:06:06.932 *******
2026-03-11 00:33:22.636336 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:22.636347 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:22.636357 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:22.636368 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:22.636379 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:22.636389 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:22.636400 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:22.636410 | orchestrator |
2026-03-11 00:33:22.636421 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-11 00:33:22.636432 | orchestrator | Wednesday 11 March 2026 00:33:15 +0000 (0:00:11.638) 0:06:18.571 *******
2026-03-11 00:33:22.636443 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-11 00:33:22.636454 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-11 00:33:22.636464 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-11 00:33:22.636475 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-11 00:33:22.636486 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-11 00:33:22.636497 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-11 00:33:22.636507 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-11 00:33:22.636518 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-11 00:33:22.636528 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-11 00:33:22.636539 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-11 00:33:22.636609 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-11 00:33:22.636623 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-11 00:33:22.636633 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-11 00:33:22.636644 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-11 00:33:22.636655 | orchestrator |
2026-03-11 00:33:22.636666 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-11 00:33:22.636677 | orchestrator | Wednesday 11 March 2026 00:33:17 +0000 (0:00:01.211) 0:06:19.782 *******
2026-03-11 00:33:22.636687 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:22.636698 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:33:22.636709 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:33:22.636720 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:33:22.636730 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:33:22.636741 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:33:22.636751 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:33:22.636762 | orchestrator |
2026-03-11 00:33:22.636773 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-11 00:33:22.636784 | orchestrator | Wednesday 11 March 2026 00:33:17 +0000 (0:00:00.533) 0:06:20.316 *******
2026-03-11 00:33:22.636794 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:22.636805 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:22.636816 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:22.636826 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:22.636837 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:22.636847 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:22.636858 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:22.636869 | orchestrator |
2026-03-11 00:33:22.636885 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-11 00:33:22.636897 | orchestrator | Wednesday 11 March 2026 00:33:21 +0000 (0:00:03.931) 0:06:24.247 *******
2026-03-11 00:33:22.636908 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:22.636919 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:33:22.636929 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:33:22.636940 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:33:22.636951 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:33:22.636961 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:33:22.636972 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:33:22.636982 | orchestrator |
2026-03-11 00:33:22.636994 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-11 00:33:22.637005 | orchestrator | Wednesday 11 March 2026 00:33:22 +0000 (0:00:00.502) 0:06:24.750 *******
2026-03-11 00:33:22.637016 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-11 00:33:22.637027 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-11 00:33:22.637038 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:22.637049 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-11 00:33:22.637060 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-11 00:33:22.637070 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:33:22.637081 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-11 00:33:22.637092 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-11 00:33:22.637103 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:33:22.637122 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-11 00:33:42.520815 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-11 00:33:42.520959 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:33:42.520989 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-11 00:33:42.521008 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-11 00:33:42.521026 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:33:42.521080 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-11 00:33:42.521103 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-11 00:33:42.521185 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:33:42.521209 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-11 00:33:42.521228 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-11 00:33:42.521336 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:33:42.521353 | orchestrator |
2026-03-11 00:33:42.521368 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-11 00:33:42.521382 | orchestrator | Wednesday 11 March 2026 00:33:22 +0000 (0:00:00.717) 0:06:25.467 *******
2026-03-11 00:33:42.521394 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:42.521406 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:33:42.521419 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:33:42.521438 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:33:42.521456 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:33:42.521474 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:33:42.521492 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:33:42.521510 | orchestrator |
2026-03-11 00:33:42.521527 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-11 00:33:42.521546 | orchestrator | Wednesday 11 March 2026 00:33:23 +0000 (0:00:00.493) 0:06:25.961 *******
2026-03-11 00:33:42.521566 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:42.521586 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:33:42.521606 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:33:42.521624 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:33:42.521644 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:33:42.521663 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:33:42.521682 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:33:42.521700 | orchestrator |
2026-03-11 00:33:42.521720 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-11 00:33:42.521738 | orchestrator | Wednesday 11 March 2026 00:33:23 +0000 (0:00:00.507) 0:06:26.454 *******
2026-03-11 00:33:42.521757 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:42.521777 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:33:42.521795 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:33:42.521811 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:33:42.521822 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:33:42.521832 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:33:42.521843 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:33:42.521854 | orchestrator |
2026-03-11 00:33:42.521864 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-11 00:33:42.521875 | orchestrator | Wednesday 11 March 2026 00:33:24 +0000 (0:00:00.507) 0:06:26.962 *******
2026-03-11 00:33:42.521886 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.521897 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:42.521907 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:42.521918 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:42.521928 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:42.521939 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:42.521949 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:42.521960 | orchestrator |
2026-03-11 00:33:42.521973 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-11 00:33:42.521992 | orchestrator | Wednesday 11 March 2026 00:33:26 +0000 (0:00:02.028) 0:06:28.991 *******
2026-03-11 00:33:42.522011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:33:42.522144 | orchestrator |
2026-03-11 00:33:42.522167 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-11 00:33:42.522187 | orchestrator | Wednesday 11 March 2026 00:33:27 +0000 (0:00:00.797) 0:06:29.788 *******
2026-03-11 00:33:42.522245 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.522268 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:42.522287 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:42.522304 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:42.522323 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:42.522335 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:42.522345 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:42.522356 | orchestrator |
2026-03-11 00:33:42.522367 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-11 00:33:42.522378 | orchestrator | Wednesday 11 March 2026 00:33:28 +0000 (0:00:00.818) 0:06:30.606 *******
2026-03-11 00:33:42.522389 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.522399 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:42.522410 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:42.522421 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:42.522438 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:42.522463 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:42.522486 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:42.522504 | orchestrator |
2026-03-11 00:33:42.522521 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-11 00:33:42.522538 | orchestrator | Wednesday 11 March 2026 00:33:28 +0000 (0:00:00.875) 0:06:31.482 *******
2026-03-11 00:33:42.522554 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.522571 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:42.522591 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:42.522609 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:42.522628 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:42.522646 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:42.522664 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:42.522675 | orchestrator |
2026-03-11 00:33:42.522686 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-11 00:33:42.522724 | orchestrator | Wednesday 11 March 2026 00:33:30 +0000 (0:00:01.632) 0:06:33.115 *******
2026-03-11 00:33:42.522736 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:42.522746 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:42.522758 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:42.522768 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:42.522779 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:42.522790 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:42.522800 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:42.522811 | orchestrator |
2026-03-11 00:33:42.522822 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-11 00:33:42.522832 | orchestrator | Wednesday 11 March 2026 00:33:32 +0000 (0:00:01.488) 0:06:34.603 *******
2026-03-11 00:33:42.522843 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.522854 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:42.522864 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:42.522875 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:42.522885 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:42.522896 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:42.522906 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:42.522917 | orchestrator |
2026-03-11 00:33:42.522928 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-11 00:33:42.522938 | orchestrator | Wednesday 11 March 2026 00:33:33 +0000 (0:00:01.369) 0:06:35.973 *******
2026-03-11 00:33:42.522949 | orchestrator | changed: [testbed-manager]
2026-03-11 00:33:42.522959 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:42.522970 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:42.522980 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:42.522991 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:42.523001 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:42.523012 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:42.523024 | orchestrator |
2026-03-11 00:33:42.523042 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-11 00:33:42.523077 | orchestrator | Wednesday 11 March 2026 00:33:34 +0000 (0:00:01.427) 0:06:37.400 *******
2026-03-11 00:33:42.523097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:33:42.523206 | orchestrator |
2026-03-11 00:33:42.523244 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-11 00:33:42.523257 | orchestrator | Wednesday 11 March 2026 00:33:35 +0000 (0:00:00.987) 0:06:38.388 *******
2026-03-11 00:33:42.523268 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.523279 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:42.523289 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:42.523300 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:42.523311 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:42.523321 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:42.523332 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:42.523342 | orchestrator |
2026-03-11 00:33:42.523353 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-11 00:33:42.523364 | orchestrator | Wednesday 11 March 2026 00:33:37 +0000 (0:00:01.401) 0:06:39.789 *******
2026-03-11 00:33:42.523375 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.523385 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:42.523396 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:42.523406 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:42.523417 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:42.523427 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:42.523438 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:42.523448 | orchestrator |
2026-03-11 00:33:42.523459 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-11 00:33:42.523470 | orchestrator | Wednesday 11 March 2026 00:33:38 +0000 (0:00:01.169) 0:06:40.959 *******
2026-03-11 00:33:42.523480 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.523491 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:42.523502 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:42.523512 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:42.523523 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:42.523533 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:42.523544 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:42.523554 | orchestrator |
2026-03-11 00:33:42.523565 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-11 00:33:42.523576 | orchestrator | Wednesday 11 March 2026 00:33:39 +0000 (0:00:01.241) 0:06:42.201 *******
2026-03-11 00:33:42.523586 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:42.523597 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:42.523627 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:42.523647 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:42.523665 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:42.523682 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:42.523699 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:42.523717 | orchestrator |
2026-03-11 00:33:42.523735 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-11 00:33:42.523751 | orchestrator | Wednesday 11 March 2026 00:33:41 +0000 (0:00:01.584) 0:06:43.786 *******
2026-03-11 00:33:42.523771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:33:42.523790 | orchestrator |
2026-03-11 00:33:42.523810 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-11 00:33:42.523827 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.967) 0:06:44.754 *******
2026-03-11 00:33:42.523847 | orchestrator |
2026-03-11 00:33:42.523865 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-11 00:33:42.523883 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.041) 0:06:44.796 *******
2026-03-11 00:33:42.523916 | orchestrator |
2026-03-11 00:33:42.523936 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-11 00:33:42.523956 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.040) 0:06:44.836 *******
2026-03-11 00:33:42.523975 | orchestrator |
2026-03-11 00:33:42.523992 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-11 00:33:42.524028 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.049) 0:06:44.886 *******
2026-03-11 00:34:08.265809 | orchestrator |
2026-03-11 00:34:08.265921 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-11 00:34:08.265938 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.053) 0:06:44.940 *******
2026-03-11 00:34:08.265950 | orchestrator |
2026-03-11 00:34:08.265962 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-11 00:34:08.265973 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.040) 0:06:44.980 *******
2026-03-11 00:34:08.265984 | orchestrator |
2026-03-11 00:34:08.265995 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-11 00:34:08.266006 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.051) 0:06:45.032 *******
2026-03-11 00:34:08.266118 | orchestrator |
2026-03-11 00:34:08.266136 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-11 00:34:08.266147 | orchestrator | Wednesday 11 March 2026 00:33:42 +0000 (0:00:00.041) 0:06:45.074 *******
2026-03-11 00:34:08.266159 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:08.266171 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:08.266182 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:08.266192 | orchestrator |
2026-03-11 00:34:08.266203 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-11 00:34:08.266214 | orchestrator | Wednesday 11 March 2026 00:33:43 +0000 (0:00:01.302) 0:06:46.376 *******
2026-03-11 00:34:08.266226 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:08.266237 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:08.266248 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:08.266259 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:08.266270 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:08.266280 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:08.266291 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:08.266302 | orchestrator |
2026-03-11 00:34:08.266313 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-11 00:34:08.266324 | orchestrator | Wednesday 11 March 2026 00:33:45 +0000 (0:00:01.462) 0:06:47.839 *******
2026-03-11 00:34:08.266334 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:08.266348 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:08.266361 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:08.266373 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:08.266385 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:08.266397 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:08.266410 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:08.266422 | orchestrator |
2026-03-11 00:34:08.266436 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-11 00:34:08.266448 | orchestrator | Wednesday 11 March 2026 00:33:46 +0000 (0:00:01.282) 0:06:49.122 *******
2026-03-11 00:34:08.266461 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:08.266474 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:08.266486 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:08.266498 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:08.266511 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:08.266523 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:08.266537 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:08.266549 | orchestrator |
2026-03-11 00:34:08.266562 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-11 00:34:08.266575 | orchestrator | Wednesday 11 March 2026 00:33:49 +0000 (0:00:02.511) 0:06:51.633 *******
2026-03-11 00:34:08.266617 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:08.266630 | orchestrator |
2026-03-11 00:34:08.266642 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-11 00:34:08.266656 | orchestrator | Wednesday 11 March 2026 00:33:49 +0000 (0:00:00.094) 0:06:51.728 *******
2026-03-11 00:34:08.266669 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:08.266681 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:08.266694 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:08.266706 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:08.266719 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:08.266731 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:08.266741 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:08.266752 | orchestrator |
2026-03-11 00:34:08.266763 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-11 00:34:08.266774 | orchestrator | Wednesday 11 March 2026 00:33:50 +0000 (0:00:00.941) 0:06:52.670 *******
2026-03-11 00:34:08.266785 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:08.266795 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:08.266821 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:08.266832 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:08.266843 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:08.266853 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:08.266864 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:08.266874 | orchestrator |
2026-03-11 00:34:08.266885 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-11 00:34:08.266896 | orchestrator | Wednesday 11 March 2026 00:33:50 +0000 (0:00:00.433) 0:06:53.104 *******
2026-03-11 00:34:08.266908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:34:08.266922 | orchestrator |
2026-03-11 00:34:08.266933 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-11 00:34:08.266944 | orchestrator | Wednesday 11 March 2026 00:33:51 +0000 (0:00:00.879) 0:06:53.983 *******
2026-03-11 00:34:08.266954 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:08.266965 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:08.266976 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:08.266986 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:08.266997 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:08.267008 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:08.267018 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:08.267029 | orchestrator |
2026-03-11 00:34:08.267040 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-11 00:34:08.267051 | orchestrator | Wednesday 11 March 2026 00:33:52 +0000 (0:00:00.862) 0:06:54.846 *******
2026-03-11 00:34:08.267062 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-11 00:34:08.267116 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-11 00:34:08.267129 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-11 00:34:08.267140 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-11 00:34:08.267150 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-11 00:34:08.267161 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-11 00:34:08.267172 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-11 00:34:08.267182 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-11 00:34:08.267193 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-11 00:34:08.267204 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-11 00:34:08.267215 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-11 00:34:08.267225 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-11 00:34:08.267245 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-11 00:34:08.267256 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-11 00:34:08.267267 | orchestrator |
2026-03-11 00:34:08.267278 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-11 00:34:08.267289 | orchestrator | Wednesday 11 March 2026 00:33:54 +0000 (0:00:02.462) 0:06:57.309 *******
2026-03-11 00:34:08.267300 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:08.267310 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:08.267321 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:08.267332 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:08.267342 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:08.267353 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:08.267363 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:08.267374 | orchestrator |
2026-03-11 00:34:08.267385 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-11 00:34:08.267396 | orchestrator | Wednesday 11 March 2026 00:33:55 +0000 (0:00:00.547) 0:06:57.857 *******
2026-03-11 00:34:08.267409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:34:08.267421 | orchestrator |
2026-03-11 00:34:08.267432 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-11 00:34:08.267443 | orchestrator | Wednesday 11 March 2026 00:33:55 +0000 (0:00:00.693) 0:06:58.550 *******
2026-03-11 00:34:08.267453 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:08.267464 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:08.267475 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:08.267485 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:08.267496 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:08.267507 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:08.267517 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:08.267528 | orchestrator |
2026-03-11 00:34:08.267539 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-11 00:34:08.267549 | orchestrator | Wednesday 11 March 2026 00:33:56 +0000 (0:00:00.869) 0:06:59.419 *******
2026-03-11 00:34:08.267560 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:08.267571 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:08.267581 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:08.267592 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:08.267602 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:08.267613 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:08.267623 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:08.267634 | orchestrator |
2026-03-11 00:34:08.267645 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-11 00:34:08.267656 | orchestrator | Wednesday 11 March 2026 00:33:57 +0000 (0:00:00.892) 0:07:00.311 *******
2026-03-11 00:34:08.267667 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:08.267677 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:08.267688 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:08.267698 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:08.267709 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:08.267720 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:08.267730 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:08.267741 | orchestrator |
2026-03-11 00:34:08.267752 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-11 00:34:08.267763 | orchestrator | Wednesday 11 March 2026 00:33:58 +0000 (0:00:00.457) 0:07:00.769 *******
2026-03-11 00:34:08.267773 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:08.267784 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:08.267795 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:08.267805 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:08.267816 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:08.267826 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:08.267843 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:08.267854 | orchestrator |
2026-03-11 00:34:08.267865 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-11 00:34:08.267876 | orchestrator | Wednesday 11 March 2026 00:33:59 +0000 (0:00:01.460) 0:07:02.229 *******
2026-03-11 00:34:08.267887 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:08.267898 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:08.267909 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:08.267919 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:08.267930 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:08.267941 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:08.267951 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:08.267962 | orchestrator |
2026-03-11 00:34:08.267973 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-11 00:34:08.267984 | orchestrator | Wednesday 11 March 2026 00:34:00 +0000 (0:00:00.435) 0:07:02.664 *******
2026-03-11 00:34:08.267994 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:08.268005 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:08.268016 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:08.268026 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:08.268037 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:08.268048 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:08.268065 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:40.968746 | orchestrator |
2026-03-11 00:34:40.969762 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-11 00:34:40.969802 | orchestrator | Wednesday 11 March 2026 00:34:08 +0000 (0:00:08.164) 0:07:10.829 *******
2026-03-11 00:34:40.969815 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:40.969828 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:40.969840 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:40.969851 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:40.969862 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:40.969873 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:40.969884 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:40.969895 | orchestrator |
2026-03-11 00:34:40.969907 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-11 00:34:40.969918 | orchestrator | Wednesday 11 March 2026 00:34:09 +0000 (0:00:01.584) 0:07:12.413 *******
2026-03-11 00:34:40.969929 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:40.969940 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:40.969951 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:40.969962 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:40.969973 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:40.969984 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:40.969995 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:40.970006 | orchestrator |
2026-03-11 00:34:40.970096 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-11 00:34:40.970108 | orchestrator | Wednesday 11 March 2026 00:34:11 +0000 (0:00:01.838) 0:07:14.252 *******
2026-03-11 00:34:40.970119 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:40.970130 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:40.970141 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:40.970152 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:40.970163 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:40.970174 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:40.970185 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:40.970196 | orchestrator |
2026-03-11 00:34:40.970207 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-11 00:34:40.970219 | orchestrator | Wednesday 11 March 2026 00:34:13 +0000 (0:00:01.719) 0:07:15.971 *******
2026-03-11 00:34:40.970230 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:40.970241 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:40.970252 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:40.970263 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:40.970297 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:40.970308 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:40.970319 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:40.970330 | orchestrator |
2026-03-11 00:34:40.970341 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-11 00:34:40.970352 | orchestrator | Wednesday 11 March 2026 00:34:14 +0000 (0:00:00.846) 0:07:16.818 *******
2026-03-11 00:34:40.970363 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:40.970374 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:40.970386 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:40.970397 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:40.970408 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:40.970419 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:40.970430 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:40.970441 | orchestrator |
2026-03-11 00:34:40.970452 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-11 00:34:40.970463 | orchestrator | Wednesday 11 March 2026 00:34:15 +0000 (0:00:00.977) 0:07:17.796 *******
2026-03-11 00:34:40.970473 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:40.970484 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:40.970495 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:40.970506 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:40.970516 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:40.970527 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:40.970538 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:40.970549 | orchestrator |
2026-03-11 00:34:40.970560 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-11 00:34:40.970592 | orchestrator | Wednesday 11 March 2026 00:34:15 +0000 (0:00:00.544) 0:07:18.341 *******
2026-03-11 00:34:40.970603 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:40.970614 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:40.970625 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:40.970636 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:40.970647 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:40.970657 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:40.970668 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:40.970679 | orchestrator |
2026-03-11 00:34:40.970742 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-11 00:34:40.970757 | orchestrator | Wednesday 11 March 2026 00:34:16 +0000 (0:00:00.515) 0:07:18.857 *******
2026-03-11 00:34:40.970768 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:40.970779 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:40.970790 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:40.970800 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:40.970812 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:40.970823 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:40.970833 | orchestrator | ok: [testbed-node-2]
2026-03-11
00:34:40.970844 | orchestrator | 2026-03-11 00:34:40.970855 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-11 00:34:40.970866 | orchestrator | Wednesday 11 March 2026 00:34:16 +0000 (0:00:00.505) 0:07:19.362 ******* 2026-03-11 00:34:40.970877 | orchestrator | ok: [testbed-manager] 2026-03-11 00:34:40.970888 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:34:40.970898 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:34:40.970942 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:34:40.970954 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:34:40.970965 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:34:40.970976 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:34:40.970986 | orchestrator | 2026-03-11 00:34:40.970997 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-11 00:34:40.971008 | orchestrator | Wednesday 11 March 2026 00:34:17 +0000 (0:00:00.654) 0:07:20.017 ******* 2026-03-11 00:34:40.971037 | orchestrator | ok: [testbed-manager] 2026-03-11 00:34:40.971048 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:34:40.971059 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:34:40.971079 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:34:40.971090 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:34:40.971101 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:34:40.971112 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:34:40.971123 | orchestrator | 2026-03-11 00:34:40.971155 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-11 00:34:40.971167 | orchestrator | Wednesday 11 March 2026 00:34:23 +0000 (0:00:05.576) 0:07:25.594 ******* 2026-03-11 00:34:40.971178 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:34:40.971189 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:34:40.971200 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:34:40.971210 
| orchestrator | skipping: [testbed-node-5] 2026-03-11 00:34:40.971221 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:34:40.971232 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:34:40.971243 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:34:40.971254 | orchestrator | 2026-03-11 00:34:40.971265 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-11 00:34:40.971276 | orchestrator | Wednesday 11 March 2026 00:34:23 +0000 (0:00:00.520) 0:07:26.115 ******* 2026-03-11 00:34:40.971289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:34:40.971303 | orchestrator | 2026-03-11 00:34:40.971314 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-11 00:34:40.971325 | orchestrator | Wednesday 11 March 2026 00:34:24 +0000 (0:00:00.980) 0:07:27.096 ******* 2026-03-11 00:34:40.971336 | orchestrator | ok: [testbed-manager] 2026-03-11 00:34:40.971347 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:34:40.971358 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:34:40.971368 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:34:40.971379 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:34:40.971415 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:34:40.971426 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:34:40.971437 | orchestrator | 2026-03-11 00:34:40.971448 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-11 00:34:40.971459 | orchestrator | Wednesday 11 March 2026 00:34:26 +0000 (0:00:02.101) 0:07:29.197 ******* 2026-03-11 00:34:40.971469 | orchestrator | ok: [testbed-manager] 2026-03-11 00:34:40.971480 | orchestrator | ok: [testbed-node-3] 2026-03-11 
00:34:40.971505 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:34:40.971516 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:34:40.971526 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:34:40.971537 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:34:40.971548 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:34:40.971558 | orchestrator | 2026-03-11 00:34:40.971569 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-11 00:34:40.971580 | orchestrator | Wednesday 11 March 2026 00:34:27 +0000 (0:00:01.110) 0:07:30.308 ******* 2026-03-11 00:34:40.971591 | orchestrator | ok: [testbed-manager] 2026-03-11 00:34:40.971602 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:34:40.971612 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:34:40.971623 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:34:40.971633 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:34:40.971644 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:34:40.971655 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:34:40.971665 | orchestrator | 2026-03-11 00:34:40.971676 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-11 00:34:40.971687 | orchestrator | Wednesday 11 March 2026 00:34:28 +0000 (0:00:00.848) 0:07:31.156 ******* 2026-03-11 00:34:40.971698 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-11 00:34:40.971711 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-11 00:34:40.971730 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-11 00:34:40.971741 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-11 00:34:40.971752 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-11 00:34:40.971769 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-11 00:34:40.971780 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-11 00:34:40.971791 | orchestrator | 2026-03-11 00:34:40.971802 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-11 00:34:40.971813 | orchestrator | Wednesday 11 March 2026 00:34:30 +0000 (0:00:02.061) 0:07:33.218 ******* 2026-03-11 00:34:40.971824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:34:40.971835 | orchestrator | 2026-03-11 00:34:40.971846 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-11 00:34:40.971857 | orchestrator | Wednesday 11 March 2026 00:34:31 +0000 (0:00:00.786) 0:07:34.004 ******* 2026-03-11 00:34:40.971868 | orchestrator | changed: [testbed-manager] 2026-03-11 00:34:40.971916 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:34:40.971927 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:34:40.971938 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:34:40.971949 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:34:40.971960 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:34:40.971971 | orchestrator | changed: 
[testbed-node-1] 2026-03-11 00:34:40.971981 | orchestrator | 2026-03-11 00:34:40.972000 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-11 00:35:12.501432 | orchestrator | Wednesday 11 March 2026 00:34:40 +0000 (0:00:09.526) 0:07:43.531 ******* 2026-03-11 00:35:12.501517 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:12.501527 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:12.501533 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:12.501540 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:12.501546 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:12.501553 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:12.501559 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:12.501566 | orchestrator | 2026-03-11 00:35:12.501573 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-11 00:35:12.501580 | orchestrator | Wednesday 11 March 2026 00:34:43 +0000 (0:00:02.054) 0:07:45.585 ******* 2026-03-11 00:35:12.501587 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:12.501594 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:12.501600 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:12.501606 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:12.501612 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:12.501619 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:12.501625 | orchestrator | 2026-03-11 00:35:12.501631 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-11 00:35:12.501637 | orchestrator | Wednesday 11 March 2026 00:34:44 +0000 (0:00:01.358) 0:07:46.944 ******* 2026-03-11 00:35:12.501643 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.501651 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.501657 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.501663 | orchestrator | changed: 
[testbed-node-5] 2026-03-11 00:35:12.501669 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.501696 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.501703 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.501709 | orchestrator | 2026-03-11 00:35:12.501716 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-11 00:35:12.501722 | orchestrator | 2026-03-11 00:35:12.501728 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-11 00:35:12.501735 | orchestrator | Wednesday 11 March 2026 00:34:45 +0000 (0:00:01.287) 0:07:48.231 ******* 2026-03-11 00:35:12.501741 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:35:12.501747 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:35:12.501753 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:35:12.501759 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:35:12.501765 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:35:12.501770 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:35:12.501777 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:35:12.501783 | orchestrator | 2026-03-11 00:35:12.501788 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-11 00:35:12.501794 | orchestrator | 2026-03-11 00:35:12.501800 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-11 00:35:12.501806 | orchestrator | Wednesday 11 March 2026 00:34:46 +0000 (0:00:00.686) 0:07:48.917 ******* 2026-03-11 00:35:12.501813 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.501819 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.501825 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.501832 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.501838 | orchestrator | changed: [testbed-node-1] 2026-03-11 
00:35:12.501844 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.501850 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.501856 | orchestrator | 2026-03-11 00:35:12.501862 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-11 00:35:12.501869 | orchestrator | Wednesday 11 March 2026 00:34:47 +0000 (0:00:01.297) 0:07:50.214 ******* 2026-03-11 00:35:12.501875 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:12.501881 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:12.501887 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:12.501893 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:12.501900 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:12.501906 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:12.501912 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:12.501918 | orchestrator | 2026-03-11 00:35:12.501924 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-11 00:35:12.501954 | orchestrator | Wednesday 11 March 2026 00:34:49 +0000 (0:00:01.971) 0:07:52.186 ******* 2026-03-11 00:35:12.501961 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:35:12.501967 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:35:12.501974 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:35:12.501980 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:35:12.501987 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:35:12.501993 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:35:12.502048 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:35:12.502057 | orchestrator | 2026-03-11 00:35:12.502064 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-11 00:35:12.502071 | orchestrator | Wednesday 11 March 2026 00:34:50 +0000 (0:00:00.488) 0:07:52.674 ******* 2026-03-11 00:35:12.502079 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:35:12.502088 | orchestrator | 2026-03-11 00:35:12.502095 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-11 00:35:12.502102 | orchestrator | Wednesday 11 March 2026 00:34:51 +0000 (0:00:00.968) 0:07:53.642 ******* 2026-03-11 00:35:12.502111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:35:12.502123 | orchestrator | 2026-03-11 00:35:12.502128 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-11 00:35:12.502132 | orchestrator | Wednesday 11 March 2026 00:34:51 +0000 (0:00:00.771) 0:07:54.413 ******* 2026-03-11 00:35:12.502137 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.502141 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.502145 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.502150 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.502154 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.502158 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.502163 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.502167 | orchestrator | 2026-03-11 00:35:12.502185 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-11 00:35:12.502192 | orchestrator | Wednesday 11 March 2026 00:35:00 +0000 (0:00:08.890) 0:08:03.304 ******* 2026-03-11 00:35:12.502198 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.502205 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.502211 | orchestrator | changed: [testbed-node-4] 2026-03-11 
00:35:12.502217 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.502224 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.502228 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.502233 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.502237 | orchestrator | 2026-03-11 00:35:12.502241 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-11 00:35:12.502245 | orchestrator | Wednesday 11 March 2026 00:35:01 +0000 (0:00:01.077) 0:08:04.381 ******* 2026-03-11 00:35:12.502250 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.502254 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.502261 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.502267 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.502273 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.502279 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.502285 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.502291 | orchestrator | 2026-03-11 00:35:12.502297 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-11 00:35:12.502304 | orchestrator | Wednesday 11 March 2026 00:35:03 +0000 (0:00:01.303) 0:08:05.684 ******* 2026-03-11 00:35:12.502310 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.502316 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.502322 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.502328 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.502334 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.502340 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.502346 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.502352 | orchestrator | 2026-03-11 00:35:12.502358 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-11 00:35:12.502363 | orchestrator | Wednesday 11 March 2026 00:35:04 +0000 (0:00:01.874) 0:08:07.558 ******* 2026-03-11 00:35:12.502369 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.502375 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.502381 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.502387 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.502393 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.502398 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.502403 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.502409 | orchestrator | 2026-03-11 00:35:12.502415 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-11 00:35:12.502421 | orchestrator | Wednesday 11 March 2026 00:35:06 +0000 (0:00:01.255) 0:08:08.814 ******* 2026-03-11 00:35:12.502427 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.502432 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.502438 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.502450 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.502456 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.502462 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.502468 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.502475 | orchestrator | 2026-03-11 00:35:12.502480 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-11 00:35:12.502486 | orchestrator | 2026-03-11 00:35:12.502493 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-11 00:35:12.502499 | orchestrator | Wednesday 11 March 2026 00:35:07 +0000 (0:00:01.349) 0:08:10.164 ******* 2026-03-11 00:35:12.502505 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-11 00:35:12.502512 | orchestrator | 2026-03-11 00:35:12.502517 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-11 00:35:12.502523 | orchestrator | Wednesday 11 March 2026 00:35:08 +0000 (0:00:00.801) 0:08:10.965 ******* 2026-03-11 00:35:12.502529 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:12.502536 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:12.502542 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:12.502548 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:12.502554 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:12.502560 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:12.502566 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:12.502573 | orchestrator | 2026-03-11 00:35:12.502585 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-11 00:35:12.502596 | orchestrator | Wednesday 11 March 2026 00:35:09 +0000 (0:00:01.061) 0:08:12.026 ******* 2026-03-11 00:35:12.502602 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:12.502609 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:12.502615 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:12.502621 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:12.502628 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:12.502634 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:12.502640 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:12.502646 | orchestrator | 2026-03-11 00:35:12.502653 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-11 00:35:12.502659 | orchestrator | Wednesday 11 March 2026 00:35:10 +0000 (0:00:01.158) 0:08:13.185 ******* 2026-03-11 00:35:12.502666 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-11 00:35:12.502672 | orchestrator | 2026-03-11 00:35:12.502678 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-11 00:35:12.502685 | orchestrator | Wednesday 11 March 2026 00:35:11 +0000 (0:00:00.957) 0:08:14.143 ******* 2026-03-11 00:35:12.502691 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:12.502697 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:12.502704 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:12.502710 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:12.502716 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:12.502722 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:12.502728 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:12.502734 | orchestrator | 2026-03-11 00:35:12.502747 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-11 00:35:14.099187 | orchestrator | Wednesday 11 March 2026 00:35:12 +0000 (0:00:00.920) 0:08:15.063 ******* 2026-03-11 00:35:14.099292 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:14.099309 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:14.099322 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:14.099333 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:14.099344 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:14.099355 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:14.099366 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:14.099404 | orchestrator | 2026-03-11 00:35:14.099417 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:35:14.099429 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-11 00:35:14.099442 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-11 00:35:14.099454 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-11 00:35:14.099465 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-11 00:35:14.099476 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-11 00:35:14.099486 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-11 00:35:14.099497 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-11 00:35:14.099508 | orchestrator | 2026-03-11 00:35:14.099519 | orchestrator | 2026-03-11 00:35:14.099530 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:35:14.099541 | orchestrator | Wednesday 11 March 2026 00:35:13 +0000 (0:00:01.123) 0:08:16.187 ******* 2026-03-11 00:35:14.099552 | orchestrator | =============================================================================== 2026-03-11 00:35:14.099563 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.05s 2026-03-11 00:35:14.099582 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.36s 2026-03-11 00:35:14.099606 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.28s 2026-03-11 00:35:14.099631 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.93s 2026-03-11 00:35:14.099649 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.90s 2026-03-11 00:35:14.099668 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.64s 2026-03-11 00:35:14.099687 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 
11.29s 2026-03-11 00:35:14.099706 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.61s 2026-03-11 00:35:14.099725 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.53s 2026-03-11 00:35:14.099744 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.37s 2026-03-11 00:35:14.099762 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.90s 2026-03-11 00:35:14.099781 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.89s 2026-03-11 00:35:14.099801 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.54s 2026-03-11 00:35:14.099841 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.43s 2026-03-11 00:35:14.099863 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.36s 2026-03-11 00:35:14.099884 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.16s 2026-03-11 00:35:14.099904 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.16s 2026-03-11 00:35:14.099951 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.97s 2026-03-11 00:35:14.099972 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.07s 2026-03-11 00:35:14.099992 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.77s 2026-03-11 00:35:14.309690 | orchestrator | + osism apply fail2ban 2026-03-11 00:35:26.656678 | orchestrator | 2026-03-11 00:35:26 | INFO  | Task 84d2ad32-dd1c-47b7-a7d0-dd2625d6db8b (fail2ban) was prepared for execution. 
2026-03-11 00:35:26.656767 | orchestrator | 2026-03-11 00:35:26 | INFO  | It takes a moment until task 84d2ad32-dd1c-47b7-a7d0-dd2625d6db8b (fail2ban) has been started and output is visible here. 2026-03-11 00:35:47.383529 | orchestrator | 2026-03-11 00:35:47.383637 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-11 00:35:47.383654 | orchestrator | 2026-03-11 00:35:47.383667 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-11 00:35:47.383678 | orchestrator | Wednesday 11 March 2026 00:35:30 +0000 (0:00:00.225) 0:00:00.225 ******* 2026-03-11 00:35:47.383692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:35:47.383706 | orchestrator | 2026-03-11 00:35:47.383717 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-11 00:35:47.383728 | orchestrator | Wednesday 11 March 2026 00:35:31 +0000 (0:00:00.999) 0:00:01.224 ******* 2026-03-11 00:35:47.383740 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:47.383753 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:47.383763 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:47.383774 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:47.383785 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:47.383795 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:47.383806 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:47.383817 | orchestrator | 2026-03-11 00:35:47.383829 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-11 00:35:47.383895 | orchestrator | Wednesday 11 March 2026 00:35:43 +0000 (0:00:11.128) 0:00:12.353 ******* 
2026-03-11 00:35:47.383911 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:47.383922 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:47.383933 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:47.383944 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:47.383955 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:47.383965 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:47.383976 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:47.383987 | orchestrator | 2026-03-11 00:35:47.383998 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-11 00:35:47.384009 | orchestrator | Wednesday 11 March 2026 00:35:44 +0000 (0:00:01.341) 0:00:13.695 ******* 2026-03-11 00:35:47.384020 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:47.384032 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:47.384043 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:47.384054 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:47.384065 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:47.384077 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:47.384089 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:47.384101 | orchestrator | 2026-03-11 00:35:47.384113 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-11 00:35:47.384126 | orchestrator | Wednesday 11 March 2026 00:35:45 +0000 (0:00:01.287) 0:00:14.983 ******* 2026-03-11 00:35:47.384139 | orchestrator | changed: [testbed-manager] 2026-03-11 00:35:47.384151 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:47.384163 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:47.384175 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:47.384188 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:35:47.384200 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:47.384212 | orchestrator | changed: 
[testbed-node-5] 2026-03-11 00:35:47.384224 | orchestrator | 2026-03-11 00:35:47.384237 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:35:47.384250 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:35:47.384291 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:35:47.384305 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:35:47.384318 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:35:47.384330 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:35:47.384343 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:35:47.384355 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:35:47.384368 | orchestrator | 2026-03-11 00:35:47.384380 | orchestrator | 2026-03-11 00:35:47.384392 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:35:47.384405 | orchestrator | Wednesday 11 March 2026 00:35:47 +0000 (0:00:01.479) 0:00:16.462 ******* 2026-03-11 00:35:47.384418 | orchestrator | =============================================================================== 2026-03-11 00:35:47.384431 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.13s 2026-03-11 00:35:47.384442 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.48s 2026-03-11 00:35:47.384452 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.34s 2026-03-11 00:35:47.384463 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.29s 2026-03-11 00:35:47.384474 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.00s 2026-03-11 00:35:47.558944 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-11 00:35:47.559025 | orchestrator | + osism apply network 2026-03-11 00:35:59.437472 | orchestrator | 2026-03-11 00:35:59 | INFO  | Task e1e01c85-10a2-4947-b976-f19467d22f8a (network) was prepared for execution. 2026-03-11 00:35:59.437573 | orchestrator | 2026-03-11 00:35:59 | INFO  | It takes a moment until task e1e01c85-10a2-4947-b976-f19467d22f8a (network) has been started and output is visible here. 2026-03-11 00:36:26.604632 | orchestrator | 2026-03-11 00:36:26.604825 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-11 00:36:26.604844 | orchestrator | 2026-03-11 00:36:26.604853 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-11 00:36:26.604862 | orchestrator | Wednesday 11 March 2026 00:36:03 +0000 (0:00:00.251) 0:00:00.251 ******* 2026-03-11 00:36:26.604871 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.604880 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:26.604888 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:26.604896 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:26.604904 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:26.604912 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:26.604920 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:26.604928 | orchestrator | 2026-03-11 00:36:26.604936 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-11 00:36:26.604944 | orchestrator | Wednesday 11 March 2026 00:36:04 +0000 (0:00:00.690) 0:00:00.942 ******* 2026-03-11 00:36:26.604954 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:36:26.604965 | orchestrator | 2026-03-11 00:36:26.604973 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-11 00:36:26.604981 | orchestrator | Wednesday 11 March 2026 00:36:05 +0000 (0:00:01.223) 0:00:02.165 ******* 2026-03-11 00:36:26.605013 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.605021 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:26.605029 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:26.605037 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:26.605045 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:26.605052 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:26.605060 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:26.605068 | orchestrator | 2026-03-11 00:36:26.605076 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-11 00:36:26.605084 | orchestrator | Wednesday 11 March 2026 00:36:07 +0000 (0:00:02.095) 0:00:04.261 ******* 2026-03-11 00:36:26.605092 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.605100 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:26.605108 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:26.605116 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:26.605124 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:26.605133 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:26.605142 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:26.605151 | orchestrator | 2026-03-11 00:36:26.605160 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-11 00:36:26.605169 | orchestrator | Wednesday 11 March 2026 00:36:09 +0000 (0:00:02.002) 0:00:06.264 ******* 
2026-03-11 00:36:26.605179 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-11 00:36:26.605188 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-11 00:36:26.605197 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-11 00:36:26.605206 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-11 00:36:26.605215 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-11 00:36:26.605228 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-11 00:36:26.605262 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-11 00:36:26.605277 | orchestrator | 2026-03-11 00:36:26.605291 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-11 00:36:26.605304 | orchestrator | Wednesday 11 March 2026 00:36:10 +0000 (0:00:01.077) 0:00:07.342 ******* 2026-03-11 00:36:26.605318 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-11 00:36:26.605334 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-11 00:36:26.605348 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 00:36:26.605362 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 00:36:26.605373 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 00:36:26.605382 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-11 00:36:26.605391 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-11 00:36:26.605400 | orchestrator | 2026-03-11 00:36:26.605409 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-11 00:36:26.605417 | orchestrator | Wednesday 11 March 2026 00:36:14 +0000 (0:00:03.424) 0:00:10.766 ******* 2026-03-11 00:36:26.605425 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:26.605433 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:36:26.605441 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:36:26.605449 | orchestrator | changed: 
[testbed-node-3] 2026-03-11 00:36:26.605462 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:36:26.605470 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:36:26.605478 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:36:26.605486 | orchestrator | 2026-03-11 00:36:26.605494 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-11 00:36:26.605502 | orchestrator | Wednesday 11 March 2026 00:36:15 +0000 (0:00:01.423) 0:00:12.189 ******* 2026-03-11 00:36:26.605510 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 00:36:26.605518 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 00:36:26.605526 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-11 00:36:26.605533 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-11 00:36:26.605541 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-11 00:36:26.605556 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 00:36:26.605564 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-11 00:36:26.605572 | orchestrator | 2026-03-11 00:36:26.605580 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-11 00:36:26.605588 | orchestrator | Wednesday 11 March 2026 00:36:17 +0000 (0:00:01.434) 0:00:13.623 ******* 2026-03-11 00:36:26.605596 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.605604 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:26.605612 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:26.605620 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:26.605627 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:26.605635 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:26.605643 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:26.605651 | orchestrator | 2026-03-11 00:36:26.605659 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-11 00:36:26.605684 | 
orchestrator | Wednesday 11 March 2026 00:36:17 +0000 (0:00:00.981) 0:00:14.605 ******* 2026-03-11 00:36:26.605693 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:26.605701 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:26.605709 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:26.605717 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:26.605725 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:26.605733 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:26.605741 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:26.605771 | orchestrator | 2026-03-11 00:36:26.605780 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-11 00:36:26.605788 | orchestrator | Wednesday 11 March 2026 00:36:18 +0000 (0:00:00.581) 0:00:15.186 ******* 2026-03-11 00:36:26.605795 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.605803 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:26.605811 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:26.605819 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:26.605827 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:26.605834 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:26.605842 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:26.605850 | orchestrator | 2026-03-11 00:36:26.605858 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-11 00:36:26.605866 | orchestrator | Wednesday 11 March 2026 00:36:20 +0000 (0:00:02.052) 0:00:17.239 ******* 2026-03-11 00:36:26.605873 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:26.605881 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:26.605889 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:26.605897 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:26.605904 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:26.605912 | 
orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:26.605921 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-11 00:36:26.605930 | orchestrator | 2026-03-11 00:36:26.605938 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-11 00:36:26.605946 | orchestrator | Wednesday 11 March 2026 00:36:21 +0000 (0:00:00.810) 0:00:18.049 ******* 2026-03-11 00:36:26.605954 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.605961 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:36:26.605969 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:36:26.605977 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:36:26.605985 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:36:26.605992 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:36:26.606000 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:36:26.606008 | orchestrator | 2026-03-11 00:36:26.606069 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-11 00:36:26.606080 | orchestrator | Wednesday 11 March 2026 00:36:22 +0000 (0:00:01.526) 0:00:19.576 ******* 2026-03-11 00:36:26.606088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:36:26.606104 | orchestrator | 2026-03-11 00:36:26.606112 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-11 00:36:26.606120 | orchestrator | Wednesday 11 March 2026 00:36:23 +0000 (0:00:01.038) 0:00:20.614 ******* 2026-03-11 00:36:26.606128 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.606136 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:26.606144 | orchestrator 
| ok: [testbed-node-1] 2026-03-11 00:36:26.606151 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:26.606159 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:26.606193 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:26.606202 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:26.606211 | orchestrator | 2026-03-11 00:36:26.606219 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-11 00:36:26.606227 | orchestrator | Wednesday 11 March 2026 00:36:24 +0000 (0:00:00.972) 0:00:21.587 ******* 2026-03-11 00:36:26.606235 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:26.606243 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:26.606251 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:26.606259 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:26.606266 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:26.606274 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:26.606287 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:26.606300 | orchestrator | 2026-03-11 00:36:26.606315 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-11 00:36:26.606327 | orchestrator | Wednesday 11 March 2026 00:36:25 +0000 (0:00:00.570) 0:00:22.158 ******* 2026-03-11 00:36:26.606346 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:36:26.606360 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:36:26.606373 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:36:26.606388 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:36:26.606396 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:36:26.606404 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:36:26.606412 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:36:26.606420 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:36:26.606428 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:36:26.606436 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:36:26.606443 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:36:26.606451 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:36:26.606459 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:36:26.606467 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:36:26.606475 | orchestrator | 2026-03-11 00:36:26.606492 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-11 00:36:42.217372 | orchestrator | Wednesday 11 March 2026 00:36:26 +0000 (0:00:01.054) 0:00:23.212 ******* 2026-03-11 00:36:42.217468 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:42.217482 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:42.217492 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:42.217501 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:42.217509 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:42.217518 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:42.217527 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:42.217535 | orchestrator | 2026-03-11 00:36:42.217545 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-11 00:36:42.217572 | orchestrator | Wednesday 11 March 2026 00:36:27 +0000 (0:00:00.579) 0:00:23.791 ******* 2026-03-11 00:36:42.217582 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-03-11 00:36:42.217593 | orchestrator | 2026-03-11 00:36:42.217602 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-11 00:36:42.217611 | orchestrator | Wednesday 11 March 2026 00:36:31 +0000 (0:00:04.325) 0:00:28.117 ******* 2026-03-11 00:36:42.217622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217632 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217672 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 
23}}) 2026-03-11 00:36:42.217689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217828 | orchestrator | 2026-03-11 00:36:42.217837 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-11 00:36:42.217846 | orchestrator | Wednesday 11 March 2026 00:36:36 +0000 (0:00:05.286) 0:00:33.403 ******* 2026-03-11 00:36:42.217855 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217864 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217881 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:36:42.217930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 
'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:42.217982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:47.710786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:47.710900 | orchestrator | 2026-03-11 00:36:47.710918 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-11 00:36:47.710931 | orchestrator | Wednesday 11 March 2026 00:36:42 +0000 (0:00:05.420) 0:00:38.824 ******* 2026-03-11 00:36:47.710945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:36:47.710958 | orchestrator | 2026-03-11 00:36:47.710969 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-03-11 00:36:47.710980 | orchestrator | Wednesday 11 March 2026 00:36:43 +0000 (0:00:00.918) 0:00:39.743 *******
2026-03-11 00:36:47.710991 | orchestrator | ok: [testbed-manager]
2026-03-11 00:36:47.711004 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:36:47.711015 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:36:47.711025 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:36:47.711036 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:36:47.711046 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:36:47.711057 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:36:47.711068 | orchestrator |
2026-03-11 00:36:47.711079 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-11 00:36:47.711090 | orchestrator | Wednesday 11 March 2026 00:36:44 +0000 (0:00:01.838) 0:00:41.581 *******
2026-03-11 00:36:47.711101 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-11 00:36:47.711113 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-11 00:36:47.711123 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-11 00:36:47.711134 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-11 00:36:47.711145 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:36:47.711157 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-11 00:36:47.711168 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-11 00:36:47.711178 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-11 00:36:47.711189 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-11 00:36:47.711200 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:36:47.711211 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-11 00:36:47.711222 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-11 00:36:47.711232 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-11 00:36:47.711244 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-11 00:36:47.711279 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:36:47.711293 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-11 00:36:47.711305 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-11 00:36:47.711316 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-11 00:36:47.711328 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-11 00:36:47.711340 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:36:47.711367 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-11 00:36:47.711380 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-11 00:36:47.711392 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-11 00:36:47.711404 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-11 00:36:47.711415 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:36:47.711427 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-11 00:36:47.711440 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-11 00:36:47.711451 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-11 00:36:47.711464 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-11 00:36:47.711476 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:36:47.711488 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-11 00:36:47.711500 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-11 00:36:47.711511 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-11 00:36:47.711523 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-11 00:36:47.711535 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:36:47.711547 | orchestrator |
2026-03-11 00:36:47.711560 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-11 00:36:47.711591 | orchestrator | Wednesday 11 March 2026 00:36:46 +0000 (0:00:01.491) 0:00:43.073 *******
2026-03-11 00:36:47.711605 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:36:47.711618 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:36:47.711630 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:36:47.711640 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:36:47.711651 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:36:47.711662 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:36:47.711672 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:36:47.711683 | orchestrator |
2026-03-11 00:36:47.711694 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-11 00:36:47.711733 | orchestrator | Wednesday 11 March 2026 00:36:46 +0000 (0:00:00.482) 0:00:43.555 *******
2026-03-11 00:36:47.711745 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:36:47.711755 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:36:47.711766 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:36:47.711776 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:36:47.711788 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:36:47.711798 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:36:47.711809 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:36:47.711819 | orchestrator |
2026-03-11 00:36:47.711830 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:36:47.711842 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-11 00:36:47.711854 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 00:36:47.711874 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 00:36:47.711885 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 00:36:47.711896 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 00:36:47.711907 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 00:36:47.711918 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 00:36:47.711928 | orchestrator |
2026-03-11 00:36:47.711939 | orchestrator |
2026-03-11 00:36:47.711950 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:36:47.711961 | orchestrator | Wednesday 11 March 2026 00:36:47 +0000 (0:00:00.545) 0:00:44.101 *******
2026-03-11 00:36:47.711971 | orchestrator | ===============================================================================
2026-03-11 00:36:47.711982 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.42s
2026-03-11 00:36:47.711993 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.29s
2026-03-11 00:36:47.712003 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.33s
2026-03-11 00:36:47.712014 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.42s
2026-03-11 00:36:47.712025 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.10s
2026-03-11 00:36:47.712035 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.05s
2026-03-11 00:36:47.712046 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.00s
2026-03-11 00:36:47.712057 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.84s
2026-03-11 00:36:47.712073 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.53s
2026-03-11 00:36:47.712084 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.49s
2026-03-11 00:36:47.712095 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.43s
2026-03-11 00:36:47.712105 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.42s
2026-03-11 00:36:47.712116 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2026-03-11 00:36:47.712127 | orchestrator | osism.commons.network : Create required directories --------------------- 1.08s
2026-03-11 00:36:47.712137 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.05s
2026-03-11 00:36:47.712148 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.04s
2026-03-11 00:36:47.712158 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.98s
2026-03-11 00:36:47.712169 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s
2026-03-11 00:36:47.712180 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 0.92s
2026-03-11 00:36:47.712190 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.81s
2026-03-11 00:36:47.912434 | orchestrator | + osism apply wireguard
2026-03-11 00:36:59.685161 | orchestrator | 2026-03-11 00:36:59 | INFO  | Task 42266346-989c-44ee-937c-052e8e7eced6 (wireguard) was prepared for execution.
2026-03-11 00:36:59.685260 | orchestrator | 2026-03-11 00:36:59 | INFO  | It takes a moment until task 42266346-989c-44ee-937c-052e8e7eced6 (wireguard) has been started and output is visible here.
2026-03-11 00:37:16.506457 | orchestrator |
2026-03-11 00:37:16.506598 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-11 00:37:16.506722 | orchestrator |
2026-03-11 00:37:16.506747 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-11 00:37:16.506760 | orchestrator | Wednesday 11 March 2026 00:37:03 +0000 (0:00:00.163) 0:00:00.163 *******
2026-03-11 00:37:16.506771 | orchestrator | ok: [testbed-manager]
2026-03-11 00:37:16.506786 | orchestrator |
2026-03-11 00:37:16.506805 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-11 00:37:16.506824 | orchestrator | Wednesday 11 March 2026 00:37:04 +0000 (0:00:01.187) 0:00:01.350 *******
2026-03-11 00:37:16.506843 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:16.506863 | orchestrator |
2026-03-11 00:37:16.506887 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-11 00:37:16.506899 | orchestrator | Wednesday 11 March 2026 00:37:09 +0000 (0:00:05.100) 0:00:06.451 *******
2026-03-11 00:37:16.506910 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:16.506921 | orchestrator |
2026-03-11 00:37:16.506932 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-11 00:37:16.506943 | orchestrator | Wednesday 11 March 2026 00:37:09 +0000 (0:00:00.493) 0:00:06.944 *******
2026-03-11 00:37:16.506953 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:16.506965 | orchestrator |
2026-03-11 00:37:16.506977 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-11 00:37:16.506990 | orchestrator | Wednesday 11 March 2026 00:37:10 +0000 (0:00:00.379) 0:00:07.323 *******
2026-03-11 00:37:16.507002 | orchestrator | ok: [testbed-manager]
2026-03-11 00:37:16.507017 | orchestrator |
2026-03-11 00:37:16.507036 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-11 00:37:16.507056 | orchestrator | Wednesday 11 March 2026 00:37:10 +0000 (0:00:00.555) 0:00:07.879 *******
2026-03-11 00:37:16.507075 | orchestrator | ok: [testbed-manager]
2026-03-11 00:37:16.507092 | orchestrator |
2026-03-11 00:37:16.507109 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-11 00:37:16.507129 | orchestrator | Wednesday 11 March 2026 00:37:11 +0000 (0:00:00.385) 0:00:08.265 *******
2026-03-11 00:37:16.507148 | orchestrator | ok: [testbed-manager]
2026-03-11 00:37:16.507169 | orchestrator |
2026-03-11 00:37:16.507188 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-11 00:37:16.507206 | orchestrator | Wednesday 11 March 2026 00:37:11 +0000 (0:00:00.377) 0:00:08.642 *******
2026-03-11 00:37:16.507222 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:16.507233 | orchestrator |
2026-03-11 00:37:16.507243 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-11 00:37:16.507254 | orchestrator | Wednesday 11 March 2026 00:37:12 +0000 (0:00:01.109) 0:00:09.752 *******
2026-03-11 00:37:16.507265 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:37:16.507277 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:16.507288 | orchestrator |
2026-03-11 00:37:16.507299 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-11 00:37:16.507310 | orchestrator | Wednesday 11 March 2026 00:37:13 +0000 (0:00:00.907) 0:00:10.659 *******
2026-03-11 00:37:16.507320 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:16.507331 | orchestrator |
2026-03-11 00:37:16.507343 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-11 00:37:16.507354 | orchestrator | Wednesday 11 March 2026 00:37:15 +0000 (0:00:01.601) 0:00:12.260 *******
2026-03-11 00:37:16.507365 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:16.507375 | orchestrator |
2026-03-11 00:37:16.507386 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:37:16.507398 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:37:16.507410 | orchestrator |
2026-03-11 00:37:16.507421 | orchestrator |
2026-03-11 00:37:16.507431 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:37:16.507452 | orchestrator | Wednesday 11 March 2026 00:37:16 +0000 (0:00:00.899) 0:00:13.159 *******
2026-03-11 00:37:16.507463 | orchestrator | ===============================================================================
2026-03-11 00:37:16.507475 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.10s
2026-03-11 00:37:16.507485 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.60s
2026-03-11 00:37:16.507496 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.19s
2026-03-11 00:37:16.507507 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.11s
2026-03-11 00:37:16.507521 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s
2026-03-11 00:37:16.507540 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s
2026-03-11 00:37:16.507558 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s
2026-03-11 00:37:16.507576 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s
2026-03-11 00:37:16.507592 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-03-11 00:37:16.507603 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2026-03-11 00:37:16.507614 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s
2026-03-11 00:37:16.782720 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-11 00:37:16.811214 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-11 00:37:16.811293 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-11 00:37:16.887628 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 196 0 --:--:-- --:--:-- --:--:-- 197
2026-03-11 00:37:16.901215 | orchestrator | + osism apply --environment custom workarounds
2026-03-11 00:37:18.761733 | orchestrator | 2026-03-11 00:37:18 | INFO  | Trying to run play workarounds in environment custom
2026-03-11 00:37:28.898999 | orchestrator | 2026-03-11 00:37:28 | INFO  | Task 286e7b9d-3c7e-44e5-b1db-55b3677fad7b (workarounds) was prepared for execution.
2026-03-11 00:37:28.899100 | orchestrator | 2026-03-11 00:37:28 | INFO  | It takes a moment until task 286e7b9d-3c7e-44e5-b1db-55b3677fad7b (workarounds) has been started and output is visible here.
2026-03-11 00:37:52.463893 | orchestrator |
2026-03-11 00:37:52.463971 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 00:37:52.463978 | orchestrator |
2026-03-11 00:37:52.463983 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-11 00:37:52.463987 | orchestrator | Wednesday 11 March 2026 00:37:32 +0000 (0:00:00.117) 0:00:00.117 *******
2026-03-11 00:37:52.463993 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-11 00:37:52.463997 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-11 00:37:52.464001 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-11 00:37:52.464006 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-11 00:37:52.464009 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-11 00:37:52.464013 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-11 00:37:52.464017 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-11 00:37:52.464021 | orchestrator |
2026-03-11 00:37:52.464025 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-11 00:37:52.464028 | orchestrator |
2026-03-11 00:37:52.464032 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-11 00:37:52.464036 | orchestrator | Wednesday 11 March 2026 00:37:33 +0000 (0:00:00.561) 0:00:00.679 *******
2026-03-11 00:37:52.464040 | orchestrator | ok: [testbed-manager]
2026-03-11 00:37:52.464045 | orchestrator |
2026-03-11 00:37:52.464064 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-11 00:37:52.464068 | orchestrator |
2026-03-11 00:37:52.464072 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-11 00:37:52.464076 | orchestrator | Wednesday 11 March 2026 00:37:35 +0000 (0:00:02.017) 0:00:02.697 *******
2026-03-11 00:37:52.464080 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:37:52.464084 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:37:52.464088 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:37:52.464091 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:37:52.464095 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:37:52.464099 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:37:52.464103 | orchestrator |
2026-03-11 00:37:52.464106 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-11 00:37:52.464110 | orchestrator |
2026-03-11 00:37:52.464114 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-11 00:37:52.464118 | orchestrator | Wednesday 11 March 2026 00:37:37 +0000 (0:00:01.784) 0:00:04.481 *******
2026-03-11 00:37:52.464122 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-11 00:37:52.464127 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-11 00:37:52.464131 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-11 00:37:52.464135 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-11 00:37:52.464138 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-11 00:37:52.464153 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-11 00:37:52.464157 | orchestrator |
2026-03-11 00:37:52.464161 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-11 00:37:52.464165 | orchestrator | Wednesday 11 March 2026 00:37:38 +0000 (0:00:01.381) 0:00:05.863 *******
2026-03-11 00:37:52.464169 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:37:52.464173 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:37:52.464177 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:37:52.464180 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:37:52.464184 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:37:52.464188 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:37:52.464192 | orchestrator |
2026-03-11 00:37:52.464195 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-11 00:37:52.464199 | orchestrator | Wednesday 11 March 2026 00:37:41 +0000 (0:00:03.373) 0:00:09.237 *******
2026-03-11 00:37:52.464203 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:37:52.464207 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:37:52.464211 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:37:52.464215 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:37:52.464218 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:37:52.464222 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:37:52.464226 | orchestrator |
2026-03-11 00:37:52.464230 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-11 00:37:52.464233 | orchestrator |
2026-03-11 00:37:52.464237 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-11 00:37:52.464241 | orchestrator | Wednesday 11 March 2026 00:37:42 +0000 (0:00:00.633) 0:00:09.870 *******
2026-03-11 00:37:52.464245 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:37:52.464248 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:37:52.464252 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:37:52.464256 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:37:52.464260 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:37:52.464263 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:52.464267 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:37:52.464275 | orchestrator |
2026-03-11 00:37:52.464279 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-11 00:37:52.464283 | orchestrator | Wednesday 11 March 2026 00:37:44 +0000 (0:00:01.431) 0:00:11.302 *******
2026-03-11 00:37:52.464287 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:37:52.464290 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:37:52.464294 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:37:52.464298 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:37:52.464302 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:37:52.464305 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:37:52.464321 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:52.464325 | orchestrator |
2026-03-11 00:37:52.464329 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-11 00:37:52.464332 | orchestrator | Wednesday 11 March 2026 00:37:45 +0000 (0:00:01.502) 0:00:12.804 *******
2026-03-11 00:37:52.464336 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:37:52.464340 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:37:52.464344 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:37:52.464348 | orchestrator | ok: [testbed-manager]
2026-03-11 00:37:52.464351 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:37:52.464355 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:37:52.464359 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:37:52.464362 | orchestrator |
2026-03-11 00:37:52.464366 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-11 00:37:52.464370 | orchestrator | Wednesday 11 March 2026 00:37:47 +0000 (0:00:01.523) 0:00:14.328 *******
2026-03-11 00:37:52.464374 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:37:52.464378 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:37:52.464381 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:37:52.464385 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:37:52.464389 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:37:52.464393 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:37:52.464396 | orchestrator | changed: [testbed-manager]
2026-03-11 00:37:52.464400 | orchestrator |
2026-03-11 00:37:52.464404 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-11 00:37:52.464408 | orchestrator | Wednesday 11 March 2026 00:37:48 +0000 (0:00:01.796) 0:00:16.124 *******
2026-03-11 00:37:52.464411 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:37:52.464415 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:37:52.464419 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:37:52.464422 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:37:52.464426 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:37:52.464430 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:37:52.464434 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:37:52.464437 | orchestrator |
2026-03-11 00:37:52.464441 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-11 00:37:52.464445 | orchestrator |
2026-03-11 00:37:52.464449 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-11 00:37:52.464453 | orchestrator | Wednesday 11 March 2026 00:37:49 +0000 (0:00:00.619) 0:00:16.744 *******
2026-03-11 00:37:52.464457 | orchestrator | ok: [testbed-manager]
2026-03-11 00:37:52.464461 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:37:52.464465 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:37:52.464469 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:37:52.464474 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:37:52.464478 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:37:52.464482 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:37:52.464486 | orchestrator |
2026-03-11 00:37:52.464491 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:37:52.464496 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-11 00:37:52.464502 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:37:52.464510 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:37:52.464518 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:37:52.464523 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:37:52.464527 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:37:52.464532 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:37:52.464536 | orchestrator |
2026-03-11 00:37:52.464540 | orchestrator |
2026-03-11 00:37:52.464544 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:37:52.464549 | orchestrator | Wednesday 11 March 2026 00:37:52 +0000 (0:00:02.919) 0:00:19.663 *******
2026-03-11 00:37:52.464553 | orchestrator | ===============================================================================
2026-03-11 00:37:52.464574 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.37s
2026-03-11 00:37:52.464580 | orchestrator | Install python3-docker -------------------------------------------------- 2.92s
2026-03-11 00:37:52.464584 | orchestrator | Apply netplan configuration --------------------------------------------- 2.02s
2026-03-11 00:37:52.464589 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.80s
2026-03-11 00:37:52.464593 | orchestrator | Apply netplan configuration --------------------------------------------- 1.78s
2026-03-11 00:37:52.464597 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s
2026-03-11 00:37:52.464602 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s
2026-03-11 00:37:52.464606 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.43s
2026-03-11 00:37:52.464610 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.38s
2026-03-11 00:37:52.464614 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.63s
2026-03-11 00:37:52.464619 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2026-03-11 00:37:52.464626 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.56s
2026-03-11 00:37:52.868436 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-11 00:38:04.616750 | orchestrator | 2026-03-11 00:38:04 | INFO  | Task 8ef0d8db-1f34-467b-ad85-3691720c5c5a (reboot) was prepared for execution.
2026-03-11 00:38:04.616865 | orchestrator | 2026-03-11 00:38:04 | INFO  | It takes a moment until task 8ef0d8db-1f34-467b-ad85-3691720c5c5a (reboot) has been started and output is visible here.
2026-03-11 00:38:13.984150 | orchestrator |
2026-03-11 00:38:13.984255 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-11 00:38:13.984279 | orchestrator |
2026-03-11 00:38:13.984298 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-11 00:38:13.984315 | orchestrator | Wednesday 11 March 2026 00:38:08 +0000 (0:00:00.179) 0:00:00.179 *******
2026-03-11 00:38:13.984331 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:38:13.984348 | orchestrator |
2026-03-11 00:38:13.984364 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-11 00:38:13.984380 | orchestrator | Wednesday 11 March 2026 00:38:08 +0000 (0:00:00.090) 0:00:00.269 *******
2026-03-11 00:38:13.984395 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:38:13.984412 | orchestrator |
2026-03-11 00:38:13.984428 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-11 00:38:13.984474 | orchestrator | Wednesday 11 March 2026 00:38:09 +0000 (0:00:00.856) 0:00:01.126 *******
2026-03-11 00:38:13.984491 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:38:13.984507 | orchestrator |
2026-03-11 00:38:13.984566 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-11 00:38:13.984582 | orchestrator |
2026-03-11 00:38:13.984598 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-11 00:38:13.984615 | orchestrator | Wednesday 11 March 2026 00:38:09 +0000 (0:00:00.102) 0:00:01.229 *******
2026-03-11 00:38:13.984631 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:38:13.984647 | orchestrator |
2026-03-11 00:38:13.984663 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-11 00:38:13.984678 | orchestrator | Wednesday 11 March 2026 00:38:09 +0000 (0:00:00.084) 0:00:01.313 *******
2026-03-11 00:38:13.984695 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:38:13.984712 | orchestrator |
2026-03-11 00:38:13.984729 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-11 00:38:13.984750 | orchestrator | Wednesday 11 March 2026 00:38:10 +0000 (0:00:00.670) 0:00:01.983 *******
2026-03-11 00:38:13.984771 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:38:13.984789 | orchestrator |
2026-03-11 00:38:13.984810 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-11 00:38:13.984831 | orchestrator |
2026-03-11 00:38:13.984848 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-11 00:38:13.984866 | orchestrator | Wednesday 11 March 2026 00:38:10 +0000 (0:00:00.087) 0:00:02.071 *******
2026-03-11 00:38:13.984887 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:38:13.984905 | orchestrator |
2026-03-11 00:38:13.984921 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-11 00:38:13.984938 | orchestrator | Wednesday 11 March 2026 00:38:10 +0000 (0:00:00.152) 0:00:02.223 *******
2026-03-11 00:38:13.984954 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:38:13.984971 | orchestrator |
2026-03-11 00:38:13.984987 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-11 00:38:13.985044 | orchestrator | Wednesday 11 March 2026 00:38:11 +0000 (0:00:00.669) 0:00:02.893 *******
2026-03-11 00:38:13.985062 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:38:13.985078 | orchestrator |
2026-03-11 00:38:13.985095 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-11 00:38:13.985112 | orchestrator |
2026-03-11 00:38:13.985129 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-11 00:38:13.985146 | orchestrator | Wednesday 11 March 2026 00:38:11 +0000 (0:00:00.110) 0:00:03.004 *******
2026-03-11 00:38:13.985162 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:38:13.985179 | orchestrator |
2026-03-11 00:38:13.985195 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-11 00:38:13.985211 | orchestrator | Wednesday 11 March 2026 00:38:11 +0000 (0:00:00.094) 0:00:03.098 *******
2026-03-11 00:38:13.985227 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:38:13.985244 | orchestrator |
2026-03-11 00:38:13.985261 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-11 00:38:13.985278 | orchestrator | Wednesday 11 March 2026 00:38:11 +0000 (0:00:00.670) 0:00:03.768 *******
2026-03-11 00:38:13.985295 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:38:13.985312 | orchestrator |
2026-03-11 00:38:13.985329 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-11 00:38:13.985345 | orchestrator |
2026-03-11 00:38:13.985362 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-11 00:38:13.985380 | orchestrator | Wednesday 11 March 2026 00:38:12 +0000 (0:00:00.102) 0:00:03.871 *******
2026-03-11 00:38:13.985397 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:38:13.985414 | orchestrator |
2026-03-11 00:38:13.985431 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-11 00:38:13.985448 | orchestrator | Wednesday 11 March 2026 00:38:12 +0000 (0:00:00.106) 0:00:03.977 *******
2026-03-11 00:38:13.985478 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:38:13.985495 | orchestrator |
2026-03-11 00:38:13.985532 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-11 00:38:13.985552 | orchestrator | Wednesday 11 March 2026 00:38:12 +0000 (0:00:00.666) 0:00:04.643 *******
2026-03-11 00:38:13.985569 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:38:13.985585 | orchestrator |
2026-03-11 00:38:13.985602 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-11 00:38:13.985619 | orchestrator |
2026-03-11 00:38:13.985635 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-11 00:38:13.985650 | orchestrator | Wednesday 11 March 2026 00:38:12 +0000 (0:00:00.117) 0:00:04.760 *******
2026-03-11 00:38:13.985666 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:38:13.985683 | orchestrator |
2026-03-11 00:38:13.985699 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-11 00:38:13.985715 | orchestrator | Wednesday 11 March 2026 00:38:13 +0000 (0:00:00.097) 0:00:04.858 *******
2026-03-11 00:38:13.985732 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:38:13.985748 | orchestrator |
2026-03-11 00:38:13.985764 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-11 00:38:13.985780 | orchestrator | Wednesday 11 March 2026 00:38:13 +0000 (0:00:00.701) 0:00:05.559 *******
2026-03-11 00:38:13.985820 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:38:13.985836 | orchestrator |
2026-03-11 00:38:13.985852 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:38:13.985870 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:38:13.985887 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:38:13.985903 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:38:13.985919 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:38:13.985935 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:38:13.985951 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:38:13.985967 | orchestrator | 2026-03-11 00:38:13.985984 | orchestrator | 2026-03-11 00:38:13.985999 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:38:13.986077 | orchestrator | Wednesday 11 March 2026 00:38:13 +0000 (0:00:00.032) 0:00:05.592 ******* 2026-03-11 00:38:13.986098 | orchestrator | =============================================================================== 2026-03-11 00:38:13.986115 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.23s 2026-03-11 00:38:13.986132 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2026-03-11 00:38:13.986148 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2026-03-11 00:38:14.213072 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-11 00:38:26.068374 | orchestrator | 2026-03-11 00:38:26 | INFO  | Task 006b606c-ae7f-4619-838c-00eac2ba4c42 (wait-for-connection) was prepared for execution. 2026-03-11 00:38:26.068513 | orchestrator | 2026-03-11 00:38:26 | INFO  | It takes a moment until task 006b606c-ae7f-4619-838c-00eac2ba4c42 (wait-for-connection) has been started and output is visible here. 
2026-03-11 00:38:42.056416 | orchestrator | 
2026-03-11 00:38:42.056611 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-11 00:38:42.056631 | orchestrator | 
2026-03-11 00:38:42.056644 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-11 00:38:42.056657 | orchestrator | Wednesday 11 March 2026 00:38:30 +0000 (0:00:00.239) 0:00:00.239 *******
2026-03-11 00:38:42.056668 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:38:42.056681 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:38:42.056693 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:38:42.056704 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:38:42.056715 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:38:42.056727 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:38:42.056739 | orchestrator | 
2026-03-11 00:38:42.056751 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:38:42.056763 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:38:42.056776 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:38:42.056787 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:38:42.056799 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:38:42.056810 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:38:42.056822 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:38:42.056833 | orchestrator | 
2026-03-11 00:38:42.056845 | orchestrator | 
2026-03-11 00:38:42.056856 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:38:42.056868 | orchestrator | Wednesday 11 March 2026 00:38:41 +0000 (0:00:11.591) 0:00:11.830 *******
2026-03-11 00:38:42.056879 | orchestrator | ===============================================================================
2026-03-11 00:38:42.056890 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.59s
2026-03-11 00:38:42.260277 | orchestrator | + osism apply hddtemp
2026-03-11 00:38:54.135671 | orchestrator | 2026-03-11 00:38:54 | INFO  | Task 9ca0c1b6-a6be-45aa-ae58-b73855ed3092 (hddtemp) was prepared for execution.
2026-03-11 00:38:54.135772 | orchestrator | 2026-03-11 00:38:54 | INFO  | It takes a moment until task 9ca0c1b6-a6be-45aa-ae58-b73855ed3092 (hddtemp) has been started and output is visible here.
2026-03-11 00:39:23.723011 | orchestrator | 
2026-03-11 00:39:23.723125 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-11 00:39:23.723143 | orchestrator | 
2026-03-11 00:39:23.723156 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-11 00:39:23.723168 | orchestrator | Wednesday 11 March 2026 00:38:58 +0000 (0:00:00.266) 0:00:00.266 *******
2026-03-11 00:39:23.723180 | orchestrator | ok: [testbed-manager]
2026-03-11 00:39:23.723192 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:39:23.723203 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:39:23.723214 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:39:23.723225 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:39:23.723236 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:39:23.723247 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:39:23.723257 | orchestrator | 
2026-03-11 00:39:23.723269 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-11 00:39:23.723280 | orchestrator | Wednesday 11 March 2026 00:38:59 +0000 (0:00:00.815) 0:00:01.082 *******
2026-03-11 00:39:23.723294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:39:23.723333 | orchestrator | 
2026-03-11 00:39:23.723345 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-11 00:39:23.723356 | orchestrator | Wednesday 11 March 2026 00:39:00 +0000 (0:00:01.199) 0:00:02.281 *******
2026-03-11 00:39:23.723368 | orchestrator | ok: [testbed-manager]
2026-03-11 00:39:23.723428 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:39:23.723441 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:39:23.723452 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:39:23.723462 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:39:23.723474 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:39:23.723485 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:39:23.723496 | orchestrator | 
2026-03-11 00:39:23.723507 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-11 00:39:23.723518 | orchestrator | Wednesday 11 March 2026 00:39:02 +0000 (0:00:02.061) 0:00:04.343 *******
2026-03-11 00:39:23.723528 | orchestrator | changed: [testbed-manager]
2026-03-11 00:39:23.723540 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:39:23.723551 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:39:23.723562 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:39:23.723573 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:39:23.723583 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:39:23.723594 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:39:23.723605 | orchestrator | 
2026-03-11 00:39:23.723616 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-11 00:39:23.723626 | orchestrator | Wednesday 11 March 2026 00:39:03 +0000 (0:00:01.138) 0:00:05.482 *******
2026-03-11 00:39:23.723637 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:39:23.723648 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:39:23.723659 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:39:23.723670 | orchestrator | ok: [testbed-manager]
2026-03-11 00:39:23.723681 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:39:23.723691 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:39:23.723717 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:39:23.723728 | orchestrator | 
2026-03-11 00:39:23.723739 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-11 00:39:23.723750 | orchestrator | Wednesday 11 March 2026 00:39:05 +0000 (0:00:02.144) 0:00:07.626 *******
2026-03-11 00:39:23.723761 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:39:23.723772 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:39:23.723783 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:39:23.723794 | orchestrator | changed: [testbed-manager]
2026-03-11 00:39:23.723804 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:39:23.723815 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:39:23.723826 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:39:23.723836 | orchestrator | 
2026-03-11 00:39:23.723847 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-11 00:39:23.723858 | orchestrator | Wednesday 11 March 2026 00:39:06 +0000 (0:00:00.707) 0:00:08.334 *******
2026-03-11 00:39:23.723869 | orchestrator | changed: [testbed-manager]
2026-03-11 00:39:23.723879 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:39:23.723890 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:39:23.723901 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:39:23.723911 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:39:23.723922 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:39:23.723933 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:39:23.723943 | orchestrator | 
2026-03-11 00:39:23.723954 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-11 00:39:23.723965 | orchestrator | Wednesday 11 March 2026 00:39:20 +0000 (0:00:14.021) 0:00:22.356 *******
2026-03-11 00:39:23.723976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:39:23.723998 | orchestrator | 
2026-03-11 00:39:23.724009 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-11 00:39:23.724020 | orchestrator | Wednesday 11 March 2026 00:39:21 +0000 (0:00:01.181) 0:00:23.538 *******
2026-03-11 00:39:23.724031 | orchestrator | changed: [testbed-manager]
2026-03-11 00:39:23.724042 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:39:23.724053 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:39:23.724064 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:39:23.724074 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:39:23.724085 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:39:23.724096 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:39:23.724107 | orchestrator | 
2026-03-11 00:39:23.724117 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:39:23.724128 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:39:23.724158 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-11 00:39:23.724170 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-11 00:39:23.724181 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-11 00:39:23.724192 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-11 00:39:23.724203 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-11 00:39:23.724214 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-11 00:39:23.724224 | orchestrator | 
2026-03-11 00:39:23.724235 | orchestrator | 
2026-03-11 00:39:23.724246 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:39:23.724257 | orchestrator | Wednesday 11 March 2026 00:39:23 +0000 (0:00:01.896) 0:00:25.434 *******
2026-03-11 00:39:23.724268 | orchestrator | ===============================================================================
2026-03-11 00:39:23.724278 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.02s
2026-03-11 00:39:23.724289 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.14s
2026-03-11 00:39:23.724300 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.06s
2026-03-11 00:39:23.724311 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s
2026-03-11 00:39:23.724322 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.20s
2026-03-11 00:39:23.724332 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s
2026-03-11 00:39:23.724343 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.14s
2026-03-11 00:39:23.724354 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.82s
2026-03-11 00:39:23.724365 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.71s
2026-03-11 00:39:24.000596 | orchestrator | ++ semver 9.5.0 7.1.1
2026-03-11 00:39:24.053881 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-11 00:39:24.053971 | orchestrator | + sudo systemctl restart manager.service
2026-03-11 00:39:40.965841 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-11 00:39:40.965915 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-11 00:39:40.965922 | orchestrator | + local max_attempts=60
2026-03-11 00:39:40.965939 | orchestrator | + local name=ceph-ansible
2026-03-11 00:39:40.965943 | orchestrator | + local attempt_num=1
2026-03-11 00:39:40.965948 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:39:40.996917 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:39:40.996985 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:39:40.996991 | orchestrator | + sleep 5
2026-03-11 00:39:46.001546 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:39:46.033725 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:39:46.033818 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:39:46.033833 | orchestrator | + sleep 5
2026-03-11 00:39:51.036769 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:39:51.076029 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:39:51.076142 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:39:51.076166 | orchestrator | + sleep 5
2026-03-11 00:39:56.079836 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:39:56.115495 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:39:56.115568 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:39:56.115577 | orchestrator | + sleep 5
2026-03-11 00:40:01.120408 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:01.160199 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:01.160291 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:01.160305 | orchestrator | + sleep 5
2026-03-11 00:40:06.164232 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:06.202271 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:06.202379 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:06.202395 | orchestrator | + sleep 5
2026-03-11 00:40:11.206995 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:11.246279 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:11.246422 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:11.246436 | orchestrator | + sleep 5
2026-03-11 00:40:16.248616 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:16.272814 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:16.272903 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:16.272918 | orchestrator | + sleep 5
2026-03-11 00:40:21.278387 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:21.312221 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:21.312367 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:21.312383 | orchestrator | + sleep 5
2026-03-11 00:40:26.315762 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:26.351368 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:26.351471 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:26.351488 | orchestrator | + sleep 5
2026-03-11 00:40:31.355185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:31.440991 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:31.441064 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:31.441073 | orchestrator | + sleep 5
2026-03-11 00:40:36.441924 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:36.473485 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:36.473576 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:36.473591 | orchestrator | + sleep 5
2026-03-11 00:40:41.477635 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:41.510468 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:41.510558 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-11 00:40:41.510572 | orchestrator | + sleep 5
2026-03-11 00:40:46.514368 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:40:46.547010 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:46.547101 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-11 00:40:46.547115 | orchestrator | + local max_attempts=60
2026-03-11 00:40:46.547127 | orchestrator | + local name=kolla-ansible
2026-03-11 00:40:46.547139 | orchestrator | + local attempt_num=1
2026-03-11 00:40:46.547831 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-11 00:40:46.582318 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:46.582386 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-11 00:40:46.582415 | orchestrator | + local max_attempts=60
2026-03-11 00:40:46.582425 | orchestrator | + local name=osism-ansible
2026-03-11 00:40:46.582433 | orchestrator | + local attempt_num=1
2026-03-11 00:40:46.582567 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-11 00:40:46.609282 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:40:46.609378 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-11 00:40:46.609393 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-11 00:40:46.771450 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-11 00:40:46.903207 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-11 00:40:47.027071 | orchestrator | ARA in osism-ansible already disabled.
2026-03-11 00:40:47.151947 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-11 00:40:47.152033 | orchestrator | + osism apply gather-facts
2026-03-11 00:40:59.136087 | orchestrator | 2026-03-11 00:40:59 | INFO  | Task 04ba1a36-226b-40cf-a9ee-cd6743b0e42b (gather-facts) was prepared for execution.
2026-03-11 00:40:59.136199 | orchestrator | 2026-03-11 00:40:59 | INFO  | It takes a moment until task 04ba1a36-226b-40cf-a9ee-cd6743b0e42b (gather-facts) has been started and output is visible here.
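The `wait_for_container_healthy` trace above shows the loop's effect: poll `docker inspect -f '{{.State.Health.Status}}'` every five seconds until the container reports `healthy`, giving up after a bounded number of attempts. A minimal sketch of that pattern follows; the real helper lives in the testbed configuration scripts, and the injectable `check_cmd` parameter is an addition here so the loop can be exercised without Docker:

```shell
#!/bin/sh
# Sketch of the health-wait loop traced above (assumption: the real
# wait_for_container_healthy inlines the docker inspect call; here the
# status command is a parameter so the loop is testable standalone).
wait_for_healthy() {
    max_attempts=$1   # e.g. 60, as in the trace
    check_cmd=$2      # command that prints the current health status
    interval=${3:-5}  # the trace sleeps 5 seconds between polls
    attempt_num=1
    while true; do
        status=$($check_cmd)
        # Container is ready once Docker reports "healthy".
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        # Give up after max_attempts polls ("unhealthy"/"starting" keep looping).
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep "$interval"
    done
}
```

With Docker available, a caller would wrap the inspect call in a small function, e.g. `check() { docker inspect -f '{{.State.Health.Status}}' ceph-ansible; }` followed by `wait_for_healthy 60 check`, mirroring the `wait_for_container_healthy 60 ceph-ansible` invocation in the trace.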
2026-03-11 00:41:12.513642 | orchestrator | 2026-03-11 00:41:12.513750 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-11 00:41:12.513766 | orchestrator | 2026-03-11 00:41:12.513779 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-11 00:41:12.513790 | orchestrator | Wednesday 11 March 2026 00:41:03 +0000 (0:00:00.196) 0:00:00.196 ******* 2026-03-11 00:41:12.513802 | orchestrator | ok: [testbed-manager] 2026-03-11 00:41:12.513814 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:41:12.513826 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:41:12.513837 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:41:12.513848 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:41:12.513859 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:41:12.513870 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:41:12.513881 | orchestrator | 2026-03-11 00:41:12.513892 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-11 00:41:12.513903 | orchestrator | 2026-03-11 00:41:12.513914 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-11 00:41:12.513925 | orchestrator | Wednesday 11 March 2026 00:41:11 +0000 (0:00:08.488) 0:00:08.685 ******* 2026-03-11 00:41:12.513936 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:41:12.513947 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:41:12.513959 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:41:12.513969 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:41:12.513981 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:12.513992 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:41:12.514003 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:41:12.514013 | orchestrator | 2026-03-11 00:41:12.514083 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-11 00:41:12.514094 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:41:12.514107 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:41:12.514118 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:41:12.514129 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:41:12.514140 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:41:12.514151 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:41:12.514162 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:41:12.514221 | orchestrator | 2026-03-11 00:41:12.514235 | orchestrator | 2026-03-11 00:41:12.514248 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:41:12.514260 | orchestrator | Wednesday 11 March 2026 00:41:12 +0000 (0:00:00.497) 0:00:09.182 ******* 2026-03-11 00:41:12.514273 | orchestrator | =============================================================================== 2026-03-11 00:41:12.514285 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.49s 2026-03-11 00:41:12.514298 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-11 00:41:12.740549 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-11 00:41:12.749894 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-11 
00:41:12.762295 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-11 00:41:12.771504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-11 00:41:12.781155 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-11 00:41:12.795647 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-11 00:41:12.805584 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-11 00:41:12.817498 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-11 00:41:12.828758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-11 00:41:12.841475 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-11 00:41:12.862098 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-11 00:41:12.880020 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-11 00:41:12.895709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-11 00:41:12.911947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-11 00:41:12.928068 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-11 00:41:12.939026 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-11 00:41:12.955817 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-11 00:41:12.970567 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-11 00:41:12.985281 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-11 00:41:13.002360 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-11 00:41:13.019743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-11 00:41:13.038342 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-11 00:41:13.057114 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-11 00:41:13.071694 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-11 00:41:13.187959 | orchestrator | ok: Runtime: 0:23:29.718059 2026-03-11 00:41:13.290963 | 2026-03-11 00:41:13.291102 | TASK [Deploy services] 2026-03-11 00:41:13.823874 | orchestrator | skipping: Conditional result was False 2026-03-11 00:41:13.844841 | 2026-03-11 00:41:13.845007 | TASK [Deploy in a nutshell] 2026-03-11 00:41:14.524503 | orchestrator | + set -e 2026-03-11 00:41:14.524680 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-11 00:41:14.524700 | orchestrator | ++ export INTERACTIVE=false 2026-03-11 00:41:14.524718 | orchestrator | ++ INTERACTIVE=false 2026-03-11 00:41:14.524730 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-11 00:41:14.524741 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-11 00:41:14.524753 | 
orchestrator | + source /opt/manager-vars.sh
2026-03-11 00:41:14.524789 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-11 00:41:14.524813 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-11 00:41:14.524825 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-11 00:41:14.524839 | orchestrator | ++ CEPH_VERSION=reef
2026-03-11 00:41:14.524849 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-11 00:41:14.524864 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-11 00:41:14.524873 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-11 00:41:14.524890 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-11 00:41:14.524899 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-11 00:41:14.524910 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-11 00:41:14.524919 | orchestrator | ++ export ARA=false
2026-03-11 00:41:14.524928 | orchestrator | ++ ARA=false
2026-03-11 00:41:14.524937 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-11 00:41:14.524947 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-11 00:41:14.524956 | orchestrator | ++ export TEMPEST=true
2026-03-11 00:41:14.524964 | orchestrator | ++ TEMPEST=true
2026-03-11 00:41:14.524973 | orchestrator | ++ export IS_ZUUL=true
2026-03-11 00:41:14.524982 | orchestrator | ++ IS_ZUUL=true
2026-03-11 00:41:14.524991 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.101
2026-03-11 00:41:14.525000 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.101
2026-03-11 00:41:14.525009 | orchestrator | ++ export EXTERNAL_API=false
2026-03-11 00:41:14.525017 | orchestrator | ++ EXTERNAL_API=false
2026-03-11 00:41:14.525026 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-11 00:41:14.525035 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-11 00:41:14.525043 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-11 00:41:14.525052 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-11 00:41:14.525061 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-11 00:41:14.525070 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-11 00:41:14.525079 | orchestrator | + echo
2026-03-11 00:41:14.525088 | orchestrator |
2026-03-11 00:41:14.525098 | orchestrator | # PULL IMAGES
2026-03-11 00:41:14.525106 | orchestrator |
2026-03-11 00:41:14.525115 | orchestrator | + echo '# PULL IMAGES'
2026-03-11 00:41:14.525124 | orchestrator | + echo
2026-03-11 00:41:14.525145 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-11 00:41:14.576749 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-11 00:41:14.576864 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-11 00:41:16.112945 | orchestrator | 2026-03-11 00:41:16 | INFO  | Trying to run play pull-images in environment custom
2026-03-11 00:41:26.296356 | orchestrator | 2026-03-11 00:41:26 | INFO  | Task 15f096a6-1f9e-4ad9-8ca4-a3c421833a0e (pull-images) was prepared for execution.
2026-03-11 00:41:26.296942 | orchestrator | 2026-03-11 00:41:26 | INFO  | Task 15f096a6-1f9e-4ad9-8ca4-a3c421833a0e is running in background. No more output. Check ARA for logs.
2026-03-11 00:41:28.272505 | orchestrator | 2026-03-11 00:41:28 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-11 00:41:38.437455 | orchestrator | 2026-03-11 00:41:38 | INFO  | Task 16c85799-02aa-442b-a6b4-25334e9b7013 (wipe-partitions) was prepared for execution.
2026-03-11 00:41:38.438444 | orchestrator | 2026-03-11 00:41:38 | INFO  | It takes a moment until task 16c85799-02aa-442b-a6b4-25334e9b7013 (wipe-partitions) has been started and output is visible here.
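[editor's note] The shell trace above gates the pull-images step on a version check: `semver 9.5.0 7.0.0` emits 1, and `[[ 1 -ge 0 ]]` passes, i.e. the step runs whenever MANAGER_VERSION is at least 7.0.0. A minimal sketch of that comparator, assuming the `semver` helper behaves as a three-way compare returning 1/0/-1; the function name `semver_cmp` is hypothetical, and a real semver comparison would also handle pre-release tags, which this sketch ignores:

```python
def semver_cmp(a: str, b: str) -> int:
    """Three-way compare of dotted versions: 1 if a > b, 0 if equal, -1 if a < b."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Python compares integer lists lexicographically, which matches
    # major.minor.patch precedence for plain numeric versions.
    if pa > pb:
        return 1
    if pa < pb:
        return -1
    return 0

print(semver_cmp("9.5.0", "7.0.0"))  # prints 1, so the `-ge 0` gate passes
```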
2026-03-11 00:41:49.840604 | orchestrator | 2026-03-11 00:41:49.840730 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-11 00:41:49.840749 | orchestrator | 2026-03-11 00:41:49.840760 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-11 00:41:49.840777 | orchestrator | Wednesday 11 March 2026 00:41:41 +0000 (0:00:00.143) 0:00:00.143 ******* 2026-03-11 00:41:49.840788 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:41:49.840802 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:41:49.840814 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:41:49.840825 | orchestrator | 2026-03-11 00:41:49.840837 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-11 00:41:49.840888 | orchestrator | Wednesday 11 March 2026 00:41:42 +0000 (0:00:00.563) 0:00:00.706 ******* 2026-03-11 00:41:49.840900 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:49.840911 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:41:49.840923 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:41:49.840940 | orchestrator | 2026-03-11 00:41:49.840951 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-11 00:41:49.840962 | orchestrator | Wednesday 11 March 2026 00:41:42 +0000 (0:00:00.292) 0:00:00.999 ******* 2026-03-11 00:41:49.840969 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:41:49.840976 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:41:49.840983 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:41:49.840990 | orchestrator | 2026-03-11 00:41:49.840997 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-11 00:41:49.841004 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.548) 0:00:01.548 ******* 2026-03-11 00:41:49.841010 | orchestrator | skipping: 
[testbed-node-3] 2026-03-11 00:41:49.841018 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:41:49.841030 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:41:49.841040 | orchestrator | 2026-03-11 00:41:49.841051 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-11 00:41:49.841063 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.227) 0:00:01.775 ******* 2026-03-11 00:41:49.841075 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-11 00:41:49.841090 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-11 00:41:49.841101 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-11 00:41:49.841112 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-11 00:41:49.841123 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-11 00:41:49.841189 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-11 00:41:49.841201 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-11 00:41:49.841212 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-11 00:41:49.841224 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-11 00:41:49.841236 | orchestrator | 2026-03-11 00:41:49.841247 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-11 00:41:49.841259 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:01.246) 0:00:03.022 ******* 2026-03-11 00:41:49.841271 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-11 00:41:49.841283 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-11 00:41:49.841295 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-11 00:41:49.841307 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-11 00:41:49.841319 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-11 00:41:49.841331 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-11 00:41:49.841344 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-11 00:41:49.841355 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-11 00:41:49.841368 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-11 00:41:49.841380 | orchestrator | 2026-03-11 00:41:49.841392 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-11 00:41:49.841404 | orchestrator | Wednesday 11 March 2026 00:41:46 +0000 (0:00:01.529) 0:00:04.551 ******* 2026-03-11 00:41:49.841416 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-11 00:41:49.841427 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-11 00:41:49.841439 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-11 00:41:49.841451 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-11 00:41:49.841462 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-11 00:41:49.841484 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-11 00:41:49.841496 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-11 00:41:49.841508 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-11 00:41:49.841531 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-11 00:41:49.841543 | orchestrator | 2026-03-11 00:41:49.841554 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-11 00:41:49.841566 | orchestrator | Wednesday 11 March 2026 00:41:48 +0000 (0:00:02.230) 0:00:06.782 ******* 2026-03-11 00:41:49.841578 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:41:49.841589 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:41:49.841601 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:41:49.841613 | orchestrator | 2026-03-11 00:41:49.841624 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-11 00:41:49.841636 | orchestrator | Wednesday 11 March 2026 00:41:48 +0000 (0:00:00.612) 0:00:07.394 ******* 2026-03-11 00:41:49.841648 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:41:49.841659 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:41:49.841670 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:41:49.841682 | orchestrator | 2026-03-11 00:41:49.841694 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:41:49.841708 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:49.841723 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:49.841756 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:49.841768 | orchestrator | 2026-03-11 00:41:49.841781 | orchestrator | 2026-03-11 00:41:49.841793 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:41:49.841805 | orchestrator | Wednesday 11 March 2026 00:41:49 +0000 (0:00:00.641) 0:00:08.035 ******* 2026-03-11 00:41:49.841816 | orchestrator | =============================================================================== 2026-03-11 00:41:49.841828 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.23s 2026-03-11 00:41:49.841840 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.53s 2026-03-11 00:41:49.841852 | orchestrator | Check device availability ----------------------------------------------- 1.25s 2026-03-11 00:41:49.841864 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2026-03-11 00:41:49.841875 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.61s 2026-03-11 00:41:49.841887 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s 2026-03-11 00:41:49.841899 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s 2026-03-11 00:41:49.841911 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s 2026-03-11 00:41:49.841922 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2026-03-11 00:42:02.290568 | orchestrator | 2026-03-11 00:42:02 | INFO  | Task 7772f44b-922d-4df7-b073-a9c1ff9d423f (facts) was prepared for execution. 2026-03-11 00:42:02.290680 | orchestrator | 2026-03-11 00:42:02 | INFO  | It takes a moment until task 7772f44b-922d-4df7-b073-a9c1ff9d423f (facts) has been started and output is visible here. 2026-03-11 00:42:15.367892 | orchestrator | 2026-03-11 00:42:15.367992 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-11 00:42:15.368007 | orchestrator | 2026-03-11 00:42:15.368018 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-11 00:42:15.368029 | orchestrator | Wednesday 11 March 2026 00:42:06 +0000 (0:00:00.238) 0:00:00.238 ******* 2026-03-11 00:42:15.368039 | orchestrator | ok: [testbed-manager] 2026-03-11 00:42:15.368050 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:42:15.368060 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:42:15.368070 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:42:15.368163 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:42:15.368176 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:42:15.368186 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:15.368196 | orchestrator | 2026-03-11 00:42:15.368208 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-11 
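[editor's note] The wipe-partitions play above reduces, per storage device, to a wipefs pass, zeroing the first 32M, then a udev rule reload and device-event retrigger. The following dry-run sketch only assembles the equivalent shell commands rather than executing them, since they are destructive; the exact flags (`--all`, `oflag=direct`, `--reload-rules`) are assumptions, not lifted from the play's tasks:

```python
def wipe_commands(devices: list[str]) -> list[str]:
    """Build the (destructive) wipe sequence for a list of block devices.

    Mirrors the play's order: wipe filesystem signatures, zero the first
    32 MiB, then reload udev rules and request fresh device events.
    """
    cmds = []
    for dev in devices:
        cmds.append(f"wipefs --all {dev}")
        cmds.append(f"dd if=/dev/zero of={dev} bs=1M count=32 oflag=direct")
    cmds.append("udevadm control --reload-rules")
    cmds.append("udevadm trigger")
    return cmds

for cmd in wipe_commands(["/dev/sdb", "/dev/sdc", "/dev/sdd"]):
    print(cmd)
```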
00:42:15.368218 | orchestrator | Wednesday 11 March 2026 00:42:07 +0000 (0:00:01.008) 0:00:01.247 ******* 2026-03-11 00:42:15.368228 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:42:15.368239 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:42:15.368249 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:42:15.368258 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:42:15.368268 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:15.368277 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:42:15.368287 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:15.368296 | orchestrator | 2026-03-11 00:42:15.368306 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-11 00:42:15.368315 | orchestrator | 2026-03-11 00:42:15.368325 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-11 00:42:15.368335 | orchestrator | Wednesday 11 March 2026 00:42:08 +0000 (0:00:01.084) 0:00:02.331 ******* 2026-03-11 00:42:15.368345 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:42:15.368354 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:42:15.368364 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:42:15.368374 | orchestrator | ok: [testbed-manager] 2026-03-11 00:42:15.368384 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:42:15.368393 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:42:15.368405 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:15.368421 | orchestrator | 2026-03-11 00:42:15.368438 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-11 00:42:15.368454 | orchestrator | 2026-03-11 00:42:15.368471 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-11 00:42:15.368506 | orchestrator | Wednesday 11 March 2026 00:42:14 +0000 (0:00:05.978) 0:00:08.310 ******* 2026-03-11 00:42:15.368525 | 
orchestrator | skipping: [testbed-manager] 2026-03-11 00:42:15.368543 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:42:15.368562 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:42:15.368579 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:42:15.368593 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:15.368603 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:42:15.368615 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:15.368625 | orchestrator | 2026-03-11 00:42:15.368637 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:42:15.368648 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:42:15.368660 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:42:15.368671 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:42:15.368682 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:42:15.368694 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:42:15.368706 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:42:15.368724 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:42:15.368740 | orchestrator | 2026-03-11 00:42:15.368757 | orchestrator | 2026-03-11 00:42:15.368775 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:42:15.368804 | orchestrator | Wednesday 11 March 2026 00:42:14 +0000 (0:00:00.519) 0:00:08.830 ******* 2026-03-11 00:42:15.368821 | orchestrator | 
=============================================================================== 2026-03-11 00:42:15.368838 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.98s 2026-03-11 00:42:15.368856 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2026-03-11 00:42:15.368873 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2026-03-11 00:42:15.368890 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-03-11 00:42:17.715860 | orchestrator | 2026-03-11 00:42:17 | INFO  | Task 71a0aca4-7e82-4b22-b1c4-b164d42f066b (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-11 00:42:17.715960 | orchestrator | 2026-03-11 00:42:17 | INFO  | It takes a moment until task 71a0aca4-7e82-4b22-b1c4-b164d42f066b (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-11 00:42:28.105240 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-11 00:42:28.105326 | orchestrator | 2.16.14 2026-03-11 00:42:28.105338 | orchestrator | 2026-03-11 00:42:28.105346 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-11 00:42:28.105355 | orchestrator | 2026-03-11 00:42:28.105364 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-11 00:42:28.105372 | orchestrator | Wednesday 11 March 2026 00:42:21 +0000 (0:00:00.270) 0:00:00.270 ******* 2026-03-11 00:42:28.105380 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-11 00:42:28.105387 | orchestrator | 2026-03-11 00:42:28.105394 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-11 00:42:28.105401 | orchestrator | Wednesday 11 March 2026 00:42:22 +0000 (0:00:00.230) 0:00:00.501 ******* 2026-03-11 00:42:28.105407 | 
orchestrator | ok: [testbed-node-3] 2026-03-11 00:42:28.105425 | orchestrator | 2026-03-11 00:42:28.105433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105441 | orchestrator | Wednesday 11 March 2026 00:42:22 +0000 (0:00:00.201) 0:00:00.702 ******* 2026-03-11 00:42:28.105456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-11 00:42:28.105465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-11 00:42:28.105471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-11 00:42:28.105477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-11 00:42:28.105483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-11 00:42:28.105489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-11 00:42:28.105496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-11 00:42:28.105502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-11 00:42:28.105509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-11 00:42:28.105516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-11 00:42:28.105530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-11 00:42:28.105536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-11 00:42:28.105542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-11 00:42:28.105548 | orchestrator | 
2026-03-11 00:42:28.105555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105561 | orchestrator | Wednesday 11 March 2026 00:42:22 +0000 (0:00:00.417) 0:00:01.120 ******* 2026-03-11 00:42:28.105588 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105597 | orchestrator | 2026-03-11 00:42:28.105603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105609 | orchestrator | Wednesday 11 March 2026 00:42:22 +0000 (0:00:00.190) 0:00:01.310 ******* 2026-03-11 00:42:28.105615 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105621 | orchestrator | 2026-03-11 00:42:28.105627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105633 | orchestrator | Wednesday 11 March 2026 00:42:23 +0000 (0:00:00.180) 0:00:01.491 ******* 2026-03-11 00:42:28.105639 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105646 | orchestrator | 2026-03-11 00:42:28.105652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105658 | orchestrator | Wednesday 11 March 2026 00:42:23 +0000 (0:00:00.177) 0:00:01.669 ******* 2026-03-11 00:42:28.105668 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105674 | orchestrator | 2026-03-11 00:42:28.105681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105686 | orchestrator | Wednesday 11 March 2026 00:42:23 +0000 (0:00:00.188) 0:00:01.858 ******* 2026-03-11 00:42:28.105692 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105699 | orchestrator | 2026-03-11 00:42:28.105705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105712 | orchestrator | Wednesday 11 March 2026 00:42:23 +0000 
(0:00:00.177) 0:00:02.036 ******* 2026-03-11 00:42:28.105718 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105725 | orchestrator | 2026-03-11 00:42:28.105731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105737 | orchestrator | Wednesday 11 March 2026 00:42:23 +0000 (0:00:00.180) 0:00:02.217 ******* 2026-03-11 00:42:28.105744 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105750 | orchestrator | 2026-03-11 00:42:28.105756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105763 | orchestrator | Wednesday 11 March 2026 00:42:23 +0000 (0:00:00.180) 0:00:02.397 ******* 2026-03-11 00:42:28.105769 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.105775 | orchestrator | 2026-03-11 00:42:28.105781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105786 | orchestrator | Wednesday 11 March 2026 00:42:24 +0000 (0:00:00.175) 0:00:02.572 ******* 2026-03-11 00:42:28.105791 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1) 2026-03-11 00:42:28.105799 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1) 2026-03-11 00:42:28.105805 | orchestrator | 2026-03-11 00:42:28.105812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105834 | orchestrator | Wednesday 11 March 2026 00:42:24 +0000 (0:00:00.366) 0:00:02.939 ******* 2026-03-11 00:42:28.105842 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7) 2026-03-11 00:42:28.105849 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7) 2026-03-11 00:42:28.105855 | orchestrator | 2026-03-11 
00:42:28.105862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105869 | orchestrator | Wednesday 11 March 2026 00:42:24 +0000 (0:00:00.502) 0:00:03.441 ******* 2026-03-11 00:42:28.105875 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4) 2026-03-11 00:42:28.105881 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4) 2026-03-11 00:42:28.105888 | orchestrator | 2026-03-11 00:42:28.105895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105901 | orchestrator | Wednesday 11 March 2026 00:42:25 +0000 (0:00:00.527) 0:00:03.968 ******* 2026-03-11 00:42:28.105913 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062) 2026-03-11 00:42:28.105921 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062) 2026-03-11 00:42:28.105928 | orchestrator | 2026-03-11 00:42:28.105935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:28.105941 | orchestrator | Wednesday 11 March 2026 00:42:26 +0000 (0:00:00.645) 0:00:04.614 ******* 2026-03-11 00:42:28.105948 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-11 00:42:28.105954 | orchestrator | 2026-03-11 00:42:28.105965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.105971 | orchestrator | Wednesday 11 March 2026 00:42:26 +0000 (0:00:00.304) 0:00:04.919 ******* 2026-03-11 00:42:28.105978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-11 00:42:28.105985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 
2026-03-11 00:42:28.105991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-11 00:42:28.105998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-11 00:42:28.106004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-11 00:42:28.106011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-11 00:42:28.106063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-11 00:42:28.106103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-11 00:42:28.106108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-11 00:42:28.106113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-11 00:42:28.106117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-11 00:42:28.106121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-11 00:42:28.106126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-11 00:42:28.106130 | orchestrator | 2026-03-11 00:42:28.106135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.106139 | orchestrator | Wednesday 11 March 2026 00:42:26 +0000 (0:00:00.317) 0:00:05.236 ******* 2026-03-11 00:42:28.106143 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.106148 | orchestrator | 2026-03-11 00:42:28.106153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.106156 | orchestrator 
| Wednesday 11 March 2026 00:42:26 +0000 (0:00:00.178) 0:00:05.415 ******* 2026-03-11 00:42:28.106161 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.106167 | orchestrator | 2026-03-11 00:42:28.106173 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.106180 | orchestrator | Wednesday 11 March 2026 00:42:27 +0000 (0:00:00.177) 0:00:05.592 ******* 2026-03-11 00:42:28.106186 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.106193 | orchestrator | 2026-03-11 00:42:28.106199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.106206 | orchestrator | Wednesday 11 March 2026 00:42:27 +0000 (0:00:00.175) 0:00:05.768 ******* 2026-03-11 00:42:28.106213 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.106220 | orchestrator | 2026-03-11 00:42:28.106226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.106233 | orchestrator | Wednesday 11 March 2026 00:42:27 +0000 (0:00:00.188) 0:00:05.956 ******* 2026-03-11 00:42:28.106246 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.106252 | orchestrator | 2026-03-11 00:42:28.106259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.106266 | orchestrator | Wednesday 11 March 2026 00:42:27 +0000 (0:00:00.185) 0:00:06.142 ******* 2026-03-11 00:42:28.106272 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.106279 | orchestrator | 2026-03-11 00:42:28.106285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:28.106291 | orchestrator | Wednesday 11 March 2026 00:42:27 +0000 (0:00:00.208) 0:00:06.350 ******* 2026-03-11 00:42:28.106299 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:28.106305 | orchestrator | 2026-03-11 
00:42:28.106321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:35.608607 | orchestrator | Wednesday 11 March 2026 00:42:28 +0000 (0:00:00.199) 0:00:06.550 ******* 2026-03-11 00:42:35.608714 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:35.608732 | orchestrator | 2026-03-11 00:42:35.608745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:35.608757 | orchestrator | Wednesday 11 March 2026 00:42:28 +0000 (0:00:00.215) 0:00:06.766 ******* 2026-03-11 00:42:35.608768 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-11 00:42:35.608779 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-11 00:42:35.608791 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-11 00:42:35.608801 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-11 00:42:35.608812 | orchestrator | 2026-03-11 00:42:35.608823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:35.608834 | orchestrator | Wednesday 11 March 2026 00:42:29 +0000 (0:00:01.001) 0:00:07.767 ******* 2026-03-11 00:42:35.608845 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:35.608856 | orchestrator | 2026-03-11 00:42:35.608866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:35.608877 | orchestrator | Wednesday 11 March 2026 00:42:29 +0000 (0:00:00.221) 0:00:07.989 ******* 2026-03-11 00:42:35.608888 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:42:35.608905 | orchestrator | 2026-03-11 00:42:35.608923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:35.608942 | orchestrator | Wednesday 11 March 2026 00:42:29 +0000 (0:00:00.197) 0:00:08.186 ******* 2026-03-11 00:42:35.608958 | orchestrator | skipping: [testbed-node-3] 2026-03-11 
00:42:35.608975 | orchestrator |
2026-03-11 00:42:35.608995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:35.609014 | orchestrator | Wednesday 11 March 2026 00:42:29 +0000 (0:00:00.204) 0:00:08.391 *******
2026-03-11 00:42:35.609034 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.609096 | orchestrator |
2026-03-11 00:42:35.609127 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-11 00:42:35.609145 | orchestrator | Wednesday 11 March 2026 00:42:30 +0000 (0:00:00.199) 0:00:08.591 *******
2026-03-11 00:42:35.609163 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-11 00:42:35.609182 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-11 00:42:35.609200 | orchestrator |
2026-03-11 00:42:35.609247 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-11 00:42:35.609270 | orchestrator | Wednesday 11 March 2026 00:42:30 +0000 (0:00:00.184) 0:00:08.775 *******
2026-03-11 00:42:35.609290 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.609311 | orchestrator |
2026-03-11 00:42:35.609328 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-11 00:42:35.609347 | orchestrator | Wednesday 11 March 2026 00:42:30 +0000 (0:00:00.134) 0:00:08.910 *******
2026-03-11 00:42:35.609366 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.609386 | orchestrator |
2026-03-11 00:42:35.609405 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-11 00:42:35.609423 | orchestrator | Wednesday 11 March 2026 00:42:30 +0000 (0:00:00.145) 0:00:09.055 *******
2026-03-11 00:42:35.609473 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.609492 | orchestrator |
2026-03-11 00:42:35.609510 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-11 00:42:35.609529 | orchestrator | Wednesday 11 March 2026 00:42:30 +0000 (0:00:00.151) 0:00:09.207 *******
2026-03-11 00:42:35.609548 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:42:35.609568 | orchestrator |
2026-03-11 00:42:35.609586 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-11 00:42:35.609603 | orchestrator | Wednesday 11 March 2026 00:42:30 +0000 (0:00:00.134) 0:00:09.342 *******
2026-03-11 00:42:35.609622 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f24027a-cb62-5112-a2b4-0ff1a158a780'}})
2026-03-11 00:42:35.609641 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '930a51f3-082d-5f24-af57-1314a0ff4b68'}})
2026-03-11 00:42:35.609659 | orchestrator |
2026-03-11 00:42:35.609677 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-11 00:42:35.609696 | orchestrator | Wednesday 11 March 2026 00:42:31 +0000 (0:00:00.182) 0:00:09.525 *******
2026-03-11 00:42:35.609716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f24027a-cb62-5112-a2b4-0ff1a158a780'}})
2026-03-11 00:42:35.609744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '930a51f3-082d-5f24-af57-1314a0ff4b68'}})
2026-03-11 00:42:35.609763 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.609781 | orchestrator |
2026-03-11 00:42:35.609799 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-11 00:42:35.609818 | orchestrator | Wednesday 11 March 2026 00:42:31 +0000 (0:00:00.145) 0:00:09.670 *******
2026-03-11 00:42:35.609837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f24027a-cb62-5112-a2b4-0ff1a158a780'}})
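The "Generate lvm_volumes structure (block only)" task above turns each device's `osd_lvm_uuid` into the `data`/`data_vg` name pair that ceph-ansible consumes, as the "Print configuration data" output later in this log confirms. A minimal sketch of that mapping, inferred from the printed configuration data (the function name is illustrative, not taken from the OSISM playbooks):

```python
# Sketch of the UUID -> lvm_volumes mapping visible in this log:
# each OSD device's osd_lvm_uuid becomes an "osd-block-<uuid>" LV
# inside a "ceph-<uuid>" VG. Helper name is illustrative only.

def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Map {device: {'osd_lvm_uuid': uuid}} to lvm_volumes entries."""
    return [
        {
            "data": "osd-block-{}".format(cfg["osd_lvm_uuid"]),
            "data_vg": "ceph-{}".format(cfg["osd_lvm_uuid"]),
        }
        for cfg in ceph_osd_devices.values()
    ]

# Values taken from the testbed-node-3 output in this log.
devices = {
    "sdb": {"osd_lvm_uuid": "1f24027a-cb62-5112-a2b4-0ff1a158a780"},
    "sdc": {"osd_lvm_uuid": "930a51f3-082d-5f24-af57-1314a0ff4b68"},
}
print(build_lvm_volumes(devices))
```

Because the UUIDs are fixed per device, the derived VG/LV names stay stable across repeated playbook runs, which keeps the handler-written configuration file idempotent.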
2026-03-11 00:42:35.609857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '930a51f3-082d-5f24-af57-1314a0ff4b68'}})
2026-03-11 00:42:35.609875 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.609892 | orchestrator |
2026-03-11 00:42:35.609903 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-11 00:42:35.609914 | orchestrator | Wednesday 11 March 2026 00:42:31 +0000 (0:00:00.356) 0:00:10.026 *******
2026-03-11 00:42:35.609925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f24027a-cb62-5112-a2b4-0ff1a158a780'}})
2026-03-11 00:42:35.609958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '930a51f3-082d-5f24-af57-1314a0ff4b68'}})
2026-03-11 00:42:35.609970 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.609981 | orchestrator |
2026-03-11 00:42:35.609991 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-11 00:42:35.610011 | orchestrator | Wednesday 11 March 2026 00:42:31 +0000 (0:00:00.155) 0:00:10.182 *******
2026-03-11 00:42:35.610128 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:42:35.610141 | orchestrator |
2026-03-11 00:42:35.610152 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-11 00:42:35.610162 | orchestrator | Wednesday 11 March 2026 00:42:31 +0000 (0:00:00.144) 0:00:10.326 *******
2026-03-11 00:42:35.610173 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:42:35.610184 | orchestrator |
2026-03-11 00:42:35.610230 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-11 00:42:35.610242 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.137) 0:00:10.464 *******
2026-03-11 00:42:35.610252 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.610263 | orchestrator |
2026-03-11 00:42:35.610274 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-11 00:42:35.610284 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.133) 0:00:10.597 *******
2026-03-11 00:42:35.610310 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.610321 | orchestrator |
2026-03-11 00:42:35.610332 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-11 00:42:35.610343 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.136) 0:00:10.733 *******
2026-03-11 00:42:35.610354 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.610364 | orchestrator |
2026-03-11 00:42:35.610375 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-11 00:42:35.610386 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.142) 0:00:10.867 *******
2026-03-11 00:42:35.610396 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:42:35.610407 | orchestrator |     "ceph_osd_devices": {
2026-03-11 00:42:35.610418 | orchestrator |         "sdb": {
2026-03-11 00:42:35.610430 | orchestrator |             "osd_lvm_uuid": "1f24027a-cb62-5112-a2b4-0ff1a158a780"
2026-03-11 00:42:35.610441 | orchestrator |         },
2026-03-11 00:42:35.610452 | orchestrator |         "sdc": {
2026-03-11 00:42:35.610463 | orchestrator |             "osd_lvm_uuid": "930a51f3-082d-5f24-af57-1314a0ff4b68"
2026-03-11 00:42:35.610474 | orchestrator |         }
2026-03-11 00:42:35.610485 | orchestrator |     }
2026-03-11 00:42:35.610496 | orchestrator | }
2026-03-11 00:42:35.610508 | orchestrator |
2026-03-11 00:42:35.610519 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-11 00:42:35.610530 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.135) 0:00:11.010 *******
2026-03-11 00:42:35.610540 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.610551 | orchestrator |
2026-03-11 00:42:35.610562 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-11 00:42:35.610573 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.135) 0:00:11.145 *******
2026-03-11 00:42:35.610583 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.610594 | orchestrator |
2026-03-11 00:42:35.610605 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-11 00:42:35.610616 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.125) 0:00:11.281 *******
2026-03-11 00:42:35.610626 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:42:35.610637 | orchestrator |
2026-03-11 00:42:35.610648 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-11 00:42:35.610658 | orchestrator | Wednesday 11 March 2026 00:42:32 +0000 (0:00:00.125) 0:00:11.407 *******
2026-03-11 00:42:35.610669 | orchestrator | changed: [testbed-node-3] => {
2026-03-11 00:42:35.610680 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-11 00:42:35.610691 | orchestrator |         "ceph_osd_devices": {
2026-03-11 00:42:35.610702 | orchestrator |             "sdb": {
2026-03-11 00:42:35.610713 | orchestrator |                 "osd_lvm_uuid": "1f24027a-cb62-5112-a2b4-0ff1a158a780"
2026-03-11 00:42:35.610724 | orchestrator |             },
2026-03-11 00:42:35.610735 | orchestrator |             "sdc": {
2026-03-11 00:42:35.610746 | orchestrator |                 "osd_lvm_uuid": "930a51f3-082d-5f24-af57-1314a0ff4b68"
2026-03-11 00:42:35.610762 | orchestrator |             }
2026-03-11 00:42:35.610782 | orchestrator |         },
2026-03-11 00:42:35.610800 | orchestrator |         "lvm_volumes": [
2026-03-11 00:42:35.610818 | orchestrator |             {
2026-03-11 00:42:35.610839 | orchestrator |                 "data": "osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780",
2026-03-11 00:42:35.610860 | orchestrator |                 "data_vg": "ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780"
2026-03-11 00:42:35.610879 | orchestrator |             },
2026-03-11 00:42:35.610893 | orchestrator |             {
2026-03-11 00:42:35.610905 | orchestrator |                 "data": "osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68",
2026-03-11 00:42:35.610915 | orchestrator |                 "data_vg": "ceph-930a51f3-082d-5f24-af57-1314a0ff4b68"
2026-03-11 00:42:35.610933 | orchestrator |             }
2026-03-11 00:42:35.610944 | orchestrator |         ]
2026-03-11 00:42:35.610955 | orchestrator |     }
2026-03-11 00:42:35.610966 | orchestrator | }
2026-03-11 00:42:35.610985 | orchestrator |
2026-03-11 00:42:35.610996 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-11 00:42:35.611007 | orchestrator | Wednesday 11 March 2026 00:42:33 +0000 (0:00:00.398) 0:00:11.805 *******
2026-03-11 00:42:35.611018 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-11 00:42:35.611029 | orchestrator |
2026-03-11 00:42:35.611039 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-11 00:42:35.611050 | orchestrator |
2026-03-11 00:42:35.611139 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-11 00:42:35.611151 | orchestrator | Wednesday 11 March 2026 00:42:35 +0000 (0:00:01.764) 0:00:13.569 *******
2026-03-11 00:42:35.611162 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-11 00:42:35.611173 | orchestrator |
2026-03-11 00:42:35.611184 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-11 00:42:35.611194 | orchestrator | Wednesday 11 March 2026 00:42:35 +0000 (0:00:00.241) 0:00:13.811 *******
2026-03-11 00:42:35.611205 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:42:35.611216 | orchestrator |
2026-03-11 00:42:35.611238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276096 | orchestrator | Wednesday 11 March 2026 00:42:35 +0000 (0:00:00.243) 0:00:14.055 *******
2026-03-11 00:42:43.276195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-11 00:42:43.276207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-11 00:42:43.276214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-11 00:42:43.276222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-11 00:42:43.276227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-11 00:42:43.276231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-11 00:42:43.276235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-11 00:42:43.276239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-11 00:42:43.276244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-11 00:42:43.276248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-11 00:42:43.276252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-11 00:42:43.276255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-11 00:42:43.276262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-11 00:42:43.276266 | orchestrator |
2026-03-11 00:42:43.276272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276276 | orchestrator | Wednesday 11 March 2026 00:42:35 +0000 (0:00:00.374) 0:00:14.429 *******
2026-03-11 00:42:43.276280 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276285 | orchestrator |
2026-03-11 00:42:43.276289 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276293 | orchestrator | Wednesday 11 March 2026 00:42:36 +0000 (0:00:00.197) 0:00:14.627 *******
2026-03-11 00:42:43.276297 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276300 | orchestrator |
2026-03-11 00:42:43.276304 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276308 | orchestrator | Wednesday 11 March 2026 00:42:36 +0000 (0:00:00.194) 0:00:14.821 *******
2026-03-11 00:42:43.276312 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276316 | orchestrator |
2026-03-11 00:42:43.276320 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276324 | orchestrator | Wednesday 11 March 2026 00:42:36 +0000 (0:00:00.175) 0:00:14.997 *******
2026-03-11 00:42:43.276345 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276349 | orchestrator |
2026-03-11 00:42:43.276354 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276358 | orchestrator | Wednesday 11 March 2026 00:42:36 +0000 (0:00:00.185) 0:00:15.182 *******
2026-03-11 00:42:43.276370 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276374 | orchestrator |
2026-03-11 00:42:43.276378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276382 | orchestrator | Wednesday 11 March 2026 00:42:37 +0000 (0:00:00.606) 0:00:15.789 *******
2026-03-11 00:42:43.276386 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276389 | orchestrator |
2026-03-11 00:42:43.276406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276410 | orchestrator | Wednesday 11 March 2026 00:42:37 +0000 (0:00:00.205) 0:00:15.995 *******
2026-03-11 00:42:43.276414 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276418 | orchestrator |
2026-03-11 00:42:43.276422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276426 | orchestrator | Wednesday 11 March 2026 00:42:37 +0000 (0:00:00.222) 0:00:16.217 *******
2026-03-11 00:42:43.276430 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276434 | orchestrator |
2026-03-11 00:42:43.276437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276441 | orchestrator | Wednesday 11 March 2026 00:42:37 +0000 (0:00:00.177) 0:00:16.395 *******
2026-03-11 00:42:43.276445 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5)
2026-03-11 00:42:43.276450 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5)
2026-03-11 00:42:43.276454 | orchestrator |
2026-03-11 00:42:43.276458 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276462 | orchestrator | Wednesday 11 March 2026 00:42:38 +0000 (0:00:00.399) 0:00:16.794 *******
2026-03-11 00:42:43.276466 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b)
2026-03-11 00:42:43.276470 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b)
2026-03-11 00:42:43.276473 | orchestrator |
2026-03-11 00:42:43.276477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276481 | orchestrator | Wednesday 11 March 2026 00:42:38 +0000 (0:00:00.414) 0:00:17.209 *******
2026-03-11 00:42:43.276485 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d)
2026-03-11 00:42:43.276489 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d)
2026-03-11 00:42:43.276493 | orchestrator |
2026-03-11 00:42:43.276496 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276511 | orchestrator | Wednesday 11 March 2026 00:42:39 +0000 (0:00:00.428) 0:00:17.637 *******
2026-03-11 00:42:43.276515 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4)
2026-03-11 00:42:43.276519 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4)
2026-03-11 00:42:43.276523 | orchestrator |
2026-03-11 00:42:43.276527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:43.276531 | orchestrator | Wednesday 11 March 2026 00:42:39 +0000 (0:00:00.400) 0:00:18.038 *******
2026-03-11 00:42:43.276534 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-11 00:42:43.276538 | orchestrator |
2026-03-11 00:42:43.276542 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276546 | orchestrator | Wednesday 11 March 2026 00:42:39 +0000 (0:00:00.316) 0:00:18.355 *******
2026-03-11 00:42:43.276550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-11 00:42:43.276558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-11 00:42:43.276562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-11 00:42:43.276566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-11 00:42:43.276569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-11 00:42:43.276573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-11 00:42:43.276577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-11 00:42:43.276581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-11 00:42:43.276585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-11 00:42:43.276588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-11 00:42:43.276592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-11 00:42:43.276596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-11 00:42:43.276600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-11 00:42:43.276604 | orchestrator |
2026-03-11 00:42:43.276608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276611 | orchestrator | Wednesday 11 March 2026 00:42:40 +0000 (0:00:00.363) 0:00:18.719 *******
2026-03-11 00:42:43.276615 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276626 | orchestrator |
2026-03-11 00:42:43.276631 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276638 | orchestrator | Wednesday 11 March 2026 00:42:40 +0000 (0:00:00.683) 0:00:19.402 *******
2026-03-11 00:42:43.276642 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276647 | orchestrator |
2026-03-11 00:42:43.276651 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276656 | orchestrator | Wednesday 11 March 2026 00:42:41 +0000 (0:00:00.212) 0:00:19.615 *******
2026-03-11 00:42:43.276660 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276664 | orchestrator |
2026-03-11 00:42:43.276669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276673 | orchestrator | Wednesday 11 March 2026 00:42:41 +0000 (0:00:00.184) 0:00:19.799 *******
2026-03-11 00:42:43.276677 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276681 | orchestrator |
2026-03-11 00:42:43.276686 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276690 | orchestrator | Wednesday 11 March 2026 00:42:41 +0000 (0:00:00.191) 0:00:19.990 *******
2026-03-11 00:42:43.276694 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276699 | orchestrator |
2026-03-11 00:42:43.276703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276707 | orchestrator | Wednesday 11 March 2026 00:42:41 +0000 (0:00:00.169) 0:00:20.160 *******
2026-03-11 00:42:43.276717 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276722 | orchestrator |
2026-03-11 00:42:43.276726 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276730 | orchestrator | Wednesday 11 March 2026 00:42:41 +0000 (0:00:00.182) 0:00:20.343 *******
2026-03-11 00:42:43.276735 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276739 | orchestrator |
2026-03-11 00:42:43.276743 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276748 | orchestrator | Wednesday 11 March 2026 00:42:42 +0000 (0:00:00.194) 0:00:20.538 *******
2026-03-11 00:42:43.276752 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:43.276759 | orchestrator |
2026-03-11 00:42:43.276764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276768 | orchestrator | Wednesday 11 March 2026 00:42:42 +0000 (0:00:00.254) 0:00:20.792 *******
2026-03-11 00:42:43.276772 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-11 00:42:43.276777 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-11 00:42:43.276782 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-11 00:42:43.276786 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-11 00:42:43.276791 | orchestrator |
2026-03-11 00:42:43.276795 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:43.276799 | orchestrator | Wednesday 11 March 2026 00:42:43 +0000 (0:00:00.769) 0:00:21.562 *******
2026-03-11 00:42:43.276804 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193006 | orchestrator |
2026-03-11 00:42:48.193147 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:48.193159 | orchestrator | Wednesday 11 March 2026 00:42:43 +0000 (0:00:00.163) 0:00:21.725 *******
2026-03-11 00:42:48.193164 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193170 | orchestrator |
2026-03-11 00:42:48.193175 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:48.193180 | orchestrator | Wednesday 11 March 2026 00:42:43 +0000 (0:00:00.186) 0:00:21.912 *******
2026-03-11 00:42:48.193185 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193190 | orchestrator |
2026-03-11 00:42:48.193195 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:48.193199 | orchestrator | Wednesday 11 March 2026 00:42:43 +0000 (0:00:00.165) 0:00:22.077 *******
2026-03-11 00:42:48.193204 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193209 | orchestrator |
2026-03-11 00:42:48.193213 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-11 00:42:48.193218 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.482) 0:00:22.560 *******
2026-03-11 00:42:48.193222 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-11 00:42:48.193227 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-11 00:42:48.193234 | orchestrator |
2026-03-11 00:42:48.193242 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-11 00:42:48.193249 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.145) 0:00:22.705 *******
2026-03-11 00:42:48.193256 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193263 | orchestrator |
2026-03-11 00:42:48.193271 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-11 00:42:48.193277 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.117) 0:00:22.822 *******
2026-03-11 00:42:48.193284 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193291 | orchestrator |
2026-03-11 00:42:48.193298 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-11 00:42:48.193304 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.111) 0:00:22.933 *******
2026-03-11 00:42:48.193311 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193318 | orchestrator |
2026-03-11 00:42:48.193324 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-11 00:42:48.193331 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.105) 0:00:23.039 *******
2026-03-11 00:42:48.193338 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:42:48.193346 | orchestrator |
2026-03-11 00:42:48.193353 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-11 00:42:48.193359 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.110) 0:00:23.150 *******
2026-03-11 00:42:48.193367 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9a64462a-5614-5a25-979d-2f017565a0c4'}})
2026-03-11 00:42:48.193375 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6773b3-a2d9-5476-8e14-434a68284534'}})
2026-03-11 00:42:48.193404 | orchestrator |
2026-03-11 00:42:48.193411 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-11 00:42:48.193419 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.141) 0:00:23.292 *******
2026-03-11 00:42:48.193428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9a64462a-5614-5a25-979d-2f017565a0c4'}})
2026-03-11 00:42:48.193453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6773b3-a2d9-5476-8e14-434a68284534'}})
2026-03-11 00:42:48.193460 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193465 | orchestrator |
2026-03-11 00:42:48.193470 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-11 00:42:48.193475 | orchestrator | Wednesday 11 March 2026 00:42:44 +0000 (0:00:00.111) 0:00:23.404 *******
2026-03-11 00:42:48.193480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9a64462a-5614-5a25-979d-2f017565a0c4'}})
2026-03-11 00:42:48.193485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6773b3-a2d9-5476-8e14-434a68284534'}})
2026-03-11 00:42:48.193490 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193495 | orchestrator |
2026-03-11 00:42:48.193499 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-11 00:42:48.193504 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.107) 0:00:23.511 *******
2026-03-11 00:42:48.193509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9a64462a-5614-5a25-979d-2f017565a0c4'}})
2026-03-11 00:42:48.193514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6773b3-a2d9-5476-8e14-434a68284534'}})
2026-03-11 00:42:48.193519 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193523 | orchestrator |
2026-03-11 00:42:48.193528 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-11 00:42:48.193532 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.106) 0:00:23.618 *******
2026-03-11 00:42:48.193537 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:42:48.193542 | orchestrator |
2026-03-11 00:42:48.193546 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-11 00:42:48.193551 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.094) 0:00:23.713 *******
2026-03-11 00:42:48.193555 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:42:48.193560 | orchestrator |
2026-03-11 00:42:48.193565 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-11 00:42:48.193569 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.103) 0:00:23.816 *******
2026-03-11 00:42:48.193586 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193592 | orchestrator |
2026-03-11 00:42:48.193598 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-11 00:42:48.193603 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.220) 0:00:24.036 *******
2026-03-11 00:42:48.193608 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193613 | orchestrator |
2026-03-11 00:42:48.193619 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-11 00:42:48.193624 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.090) 0:00:24.127 *******
2026-03-11 00:42:48.193629 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193634 | orchestrator |
2026-03-11 00:42:48.193639 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-11 00:42:48.193645 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.093) 0:00:24.221 *******
2026-03-11 00:42:48.193650 | orchestrator | ok: [testbed-node-4] => {
2026-03-11 00:42:48.193656 | orchestrator |     "ceph_osd_devices": {
2026-03-11 00:42:48.193661 | orchestrator |         "sdb": {
2026-03-11 00:42:48.193667 | orchestrator |             "osd_lvm_uuid": "9a64462a-5614-5a25-979d-2f017565a0c4"
2026-03-11 00:42:48.193673 | orchestrator |         },
2026-03-11 00:42:48.193682 | orchestrator |         "sdc": {
2026-03-11 00:42:48.193688 | orchestrator |             "osd_lvm_uuid": "9e6773b3-a2d9-5476-8e14-434a68284534"
2026-03-11 00:42:48.193693 | orchestrator |         }
2026-03-11 00:42:48.193698 | orchestrator |     }
2026-03-11 00:42:48.193704 | orchestrator | }
2026-03-11 00:42:48.193709 | orchestrator |
2026-03-11 00:42:48.193715 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-11 00:42:48.193720 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.100) 0:00:24.321 *******
2026-03-11 00:42:48.193725 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193730 | orchestrator |
2026-03-11 00:42:48.193735 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-11 00:42:48.193741 | orchestrator | Wednesday 11 March 2026 00:42:45 +0000 (0:00:00.089) 0:00:24.411 *******
2026-03-11 00:42:48.193746 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193751 | orchestrator |
2026-03-11 00:42:48.193756 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-11 00:42:48.193762 | orchestrator | Wednesday 11 March 2026 00:42:46 +0000 (0:00:00.092) 0:00:24.503 *******
2026-03-11 00:42:48.193767 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:48.193772 | orchestrator |
2026-03-11 00:42:48.193777 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-11 00:42:48.193782 | orchestrator | Wednesday 11 March 2026 00:42:46 +0000 (0:00:00.121) 0:00:24.625 *******
2026-03-11 00:42:48.193787 | orchestrator | changed: [testbed-node-4] => {
2026-03-11 00:42:48.193792 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-11 00:42:48.193798 | orchestrator |         "ceph_osd_devices": {
2026-03-11 00:42:48.193803 | orchestrator |             "sdb": {
2026-03-11 00:42:48.193808 | orchestrator |                 "osd_lvm_uuid": "9a64462a-5614-5a25-979d-2f017565a0c4"
2026-03-11 00:42:48.193813 | orchestrator |             },
2026-03-11 00:42:48.193819 | orchestrator |             "sdc": {
2026-03-11 00:42:48.193824 | orchestrator |                 "osd_lvm_uuid": "9e6773b3-a2d9-5476-8e14-434a68284534"
2026-03-11 00:42:48.193829 | orchestrator |             }
2026-03-11 00:42:48.193834 | orchestrator |         },
2026-03-11 00:42:48.193840 | orchestrator |         "lvm_volumes": [
2026-03-11 00:42:48.193845 | orchestrator |             {
2026-03-11 00:42:48.193850 | orchestrator |                 "data": "osd-block-9a64462a-5614-5a25-979d-2f017565a0c4",
2026-03-11 00:42:48.193855 | orchestrator |                 "data_vg": "ceph-9a64462a-5614-5a25-979d-2f017565a0c4"
2026-03-11 00:42:48.193861 | orchestrator |             },
2026-03-11 00:42:48.193866 | orchestrator |             {
2026-03-11 00:42:48.193871 | orchestrator |                 "data": "osd-block-9e6773b3-a2d9-5476-8e14-434a68284534",
2026-03-11 00:42:48.193876 | orchestrator |                 "data_vg": "ceph-9e6773b3-a2d9-5476-8e14-434a68284534"
2026-03-11 00:42:48.193882 | orchestrator |  } 2026-03-11 00:42:48.193887 | orchestrator |  ] 2026-03-11 00:42:48.193892 | orchestrator |  } 2026-03-11 00:42:48.193897 | orchestrator | } 2026-03-11 00:42:48.193903 | orchestrator | 2026-03-11 00:42:48.193908 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-11 00:42:48.193913 | orchestrator | Wednesday 11 March 2026 00:42:46 +0000 (0:00:00.169) 0:00:24.794 ******* 2026-03-11 00:42:48.193918 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-11 00:42:48.193924 | orchestrator | 2026-03-11 00:42:48.193929 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-11 00:42:48.193934 | orchestrator | 2026-03-11 00:42:48.193940 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-11 00:42:48.193945 | orchestrator | Wednesday 11 March 2026 00:42:47 +0000 (0:00:00.911) 0:00:25.706 ******* 2026-03-11 00:42:48.193949 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-11 00:42:48.193954 | orchestrator | 2026-03-11 00:42:48.193958 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-11 00:42:48.193970 | orchestrator | Wednesday 11 March 2026 00:42:47 +0000 (0:00:00.451) 0:00:26.157 ******* 2026-03-11 00:42:48.193975 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:48.193980 | orchestrator | 2026-03-11 00:42:48.193984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:48.193989 | orchestrator | Wednesday 11 March 2026 00:42:47 +0000 (0:00:00.169) 0:00:26.326 ******* 2026-03-11 00:42:48.193994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-11 00:42:48.193998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop1) 2026-03-11 00:42:48.194003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-11 00:42:48.194007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-11 00:42:48.194012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-11 00:42:48.194074 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-11 00:42:55.046528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-11 00:42:55.046664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-11 00:42:55.046679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-11 00:42:55.046692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-11 00:42:55.046703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-11 00:42:55.046715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-11 00:42:55.046726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-11 00:42:55.046738 | orchestrator | 2026-03-11 00:42:55.046751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.046763 | orchestrator | Wednesday 11 March 2026 00:42:48 +0000 (0:00:00.307) 0:00:26.633 ******* 2026-03-11 00:42:55.046775 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.046787 | orchestrator | 2026-03-11 00:42:55.046798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.046809 | orchestrator | Wednesday 11 March 
2026 00:42:48 +0000 (0:00:00.192) 0:00:26.826 ******* 2026-03-11 00:42:55.046820 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.046831 | orchestrator | 2026-03-11 00:42:55.046842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.046853 | orchestrator | Wednesday 11 March 2026 00:42:48 +0000 (0:00:00.186) 0:00:27.012 ******* 2026-03-11 00:42:55.046864 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.046875 | orchestrator | 2026-03-11 00:42:55.046886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.046897 | orchestrator | Wednesday 11 March 2026 00:42:48 +0000 (0:00:00.202) 0:00:27.215 ******* 2026-03-11 00:42:55.046908 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.046919 | orchestrator | 2026-03-11 00:42:55.046930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.046941 | orchestrator | Wednesday 11 March 2026 00:42:48 +0000 (0:00:00.142) 0:00:27.358 ******* 2026-03-11 00:42:55.046952 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.046963 | orchestrator | 2026-03-11 00:42:55.046981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047000 | orchestrator | Wednesday 11 March 2026 00:42:49 +0000 (0:00:00.159) 0:00:27.518 ******* 2026-03-11 00:42:55.047019 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.047080 | orchestrator | 2026-03-11 00:42:55.047100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047118 | orchestrator | Wednesday 11 March 2026 00:42:49 +0000 (0:00:00.161) 0:00:27.679 ******* 2026-03-11 00:42:55.047177 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.047196 | orchestrator | 2026-03-11 00:42:55.047215 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047235 | orchestrator | Wednesday 11 March 2026 00:42:49 +0000 (0:00:00.159) 0:00:27.839 ******* 2026-03-11 00:42:55.047255 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.047273 | orchestrator | 2026-03-11 00:42:55.047293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047314 | orchestrator | Wednesday 11 March 2026 00:42:49 +0000 (0:00:00.173) 0:00:28.013 ******* 2026-03-11 00:42:55.047334 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7) 2026-03-11 00:42:55.047357 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7) 2026-03-11 00:42:55.047370 | orchestrator | 2026-03-11 00:42:55.047384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047398 | orchestrator | Wednesday 11 March 2026 00:42:50 +0000 (0:00:00.662) 0:00:28.676 ******* 2026-03-11 00:42:55.047411 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5) 2026-03-11 00:42:55.047422 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5) 2026-03-11 00:42:55.047432 | orchestrator | 2026-03-11 00:42:55.047443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047454 | orchestrator | Wednesday 11 March 2026 00:42:50 +0000 (0:00:00.379) 0:00:29.055 ******* 2026-03-11 00:42:55.047465 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20) 2026-03-11 00:42:55.047476 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20) 2026-03-11 00:42:55.047486 | orchestrator | 
2026-03-11 00:42:55.047497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047508 | orchestrator | Wednesday 11 March 2026 00:42:51 +0000 (0:00:00.399) 0:00:29.454 ******* 2026-03-11 00:42:55.047518 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb) 2026-03-11 00:42:55.047529 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb) 2026-03-11 00:42:55.047540 | orchestrator | 2026-03-11 00:42:55.047551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:55.047561 | orchestrator | Wednesday 11 March 2026 00:42:51 +0000 (0:00:00.399) 0:00:29.854 ******* 2026-03-11 00:42:55.047572 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-11 00:42:55.047583 | orchestrator | 2026-03-11 00:42:55.047594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.047625 | orchestrator | Wednesday 11 March 2026 00:42:51 +0000 (0:00:00.308) 0:00:30.162 ******* 2026-03-11 00:42:55.047636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-11 00:42:55.047647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-11 00:42:55.047658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-11 00:42:55.047669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-11 00:42:55.047679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-11 00:42:55.047712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-11 
00:42:55.047723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-11 00:42:55.047734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-11 00:42:55.047755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-11 00:42:55.047766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-11 00:42:55.047777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-11 00:42:55.047788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-11 00:42:55.047798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-11 00:42:55.047809 | orchestrator | 2026-03-11 00:42:55.047820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.047831 | orchestrator | Wednesday 11 March 2026 00:42:52 +0000 (0:00:00.316) 0:00:30.479 ******* 2026-03-11 00:42:55.047842 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.047852 | orchestrator | 2026-03-11 00:42:55.047863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.047874 | orchestrator | Wednesday 11 March 2026 00:42:52 +0000 (0:00:00.183) 0:00:30.662 ******* 2026-03-11 00:42:55.047884 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.047895 | orchestrator | 2026-03-11 00:42:55.047906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.047917 | orchestrator | Wednesday 11 March 2026 00:42:52 +0000 (0:00:00.187) 0:00:30.850 ******* 2026-03-11 00:42:55.047933 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.047944 | 
orchestrator | 2026-03-11 00:42:55.047955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.047966 | orchestrator | Wednesday 11 March 2026 00:42:52 +0000 (0:00:00.164) 0:00:31.014 ******* 2026-03-11 00:42:55.047976 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.047987 | orchestrator | 2026-03-11 00:42:55.047998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048008 | orchestrator | Wednesday 11 March 2026 00:42:52 +0000 (0:00:00.193) 0:00:31.207 ******* 2026-03-11 00:42:55.048019 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048065 | orchestrator | 2026-03-11 00:42:55.048077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048087 | orchestrator | Wednesday 11 March 2026 00:42:52 +0000 (0:00:00.175) 0:00:31.382 ******* 2026-03-11 00:42:55.048098 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048109 | orchestrator | 2026-03-11 00:42:55.048120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048131 | orchestrator | Wednesday 11 March 2026 00:42:53 +0000 (0:00:00.511) 0:00:31.893 ******* 2026-03-11 00:42:55.048141 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048152 | orchestrator | 2026-03-11 00:42:55.048163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048173 | orchestrator | Wednesday 11 March 2026 00:42:53 +0000 (0:00:00.171) 0:00:32.065 ******* 2026-03-11 00:42:55.048184 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048195 | orchestrator | 2026-03-11 00:42:55.048205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048216 | orchestrator | Wednesday 11 March 2026 
00:42:53 +0000 (0:00:00.171) 0:00:32.237 ******* 2026-03-11 00:42:55.048227 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-11 00:42:55.048238 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-11 00:42:55.048250 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-11 00:42:55.048261 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-11 00:42:55.048271 | orchestrator | 2026-03-11 00:42:55.048282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048293 | orchestrator | Wednesday 11 March 2026 00:42:54 +0000 (0:00:00.577) 0:00:32.815 ******* 2026-03-11 00:42:55.048304 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048314 | orchestrator | 2026-03-11 00:42:55.048373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048385 | orchestrator | Wednesday 11 March 2026 00:42:54 +0000 (0:00:00.175) 0:00:32.990 ******* 2026-03-11 00:42:55.048396 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048407 | orchestrator | 2026-03-11 00:42:55.048417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048428 | orchestrator | Wednesday 11 March 2026 00:42:54 +0000 (0:00:00.173) 0:00:33.163 ******* 2026-03-11 00:42:55.048439 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048450 | orchestrator | 2026-03-11 00:42:55.048461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:55.048472 | orchestrator | Wednesday 11 March 2026 00:42:54 +0000 (0:00:00.163) 0:00:33.327 ******* 2026-03-11 00:42:55.048482 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:55.048493 | orchestrator | 2026-03-11 00:42:55.048512 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-11 00:42:58.681633 | 
orchestrator | Wednesday 11 March 2026 00:42:55 +0000 (0:00:00.163) 0:00:33.491 ******* 2026-03-11 00:42:58.681751 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-11 00:42:58.681767 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-11 00:42:58.681780 | orchestrator | 2026-03-11 00:42:58.681793 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-11 00:42:58.681805 | orchestrator | Wednesday 11 March 2026 00:42:55 +0000 (0:00:00.144) 0:00:33.635 ******* 2026-03-11 00:42:58.681816 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.681828 | orchestrator | 2026-03-11 00:42:58.681839 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-11 00:42:58.681851 | orchestrator | Wednesday 11 March 2026 00:42:55 +0000 (0:00:00.105) 0:00:33.741 ******* 2026-03-11 00:42:58.681861 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.681872 | orchestrator | 2026-03-11 00:42:58.681883 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-11 00:42:58.681894 | orchestrator | Wednesday 11 March 2026 00:42:55 +0000 (0:00:00.109) 0:00:33.851 ******* 2026-03-11 00:42:58.681905 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.681916 | orchestrator | 2026-03-11 00:42:58.681927 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-11 00:42:58.681938 | orchestrator | Wednesday 11 March 2026 00:42:55 +0000 (0:00:00.345) 0:00:34.196 ******* 2026-03-11 00:42:58.681949 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:58.681961 | orchestrator | 2026-03-11 00:42:58.681972 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-11 00:42:58.681984 | orchestrator | Wednesday 11 March 2026 00:42:55 +0000 (0:00:00.106) 0:00:34.303 
******* 2026-03-11 00:42:58.681996 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}}) 2026-03-11 00:42:58.682008 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12aec0f2-63b1-5667-a447-7095f264ece1'}}) 2026-03-11 00:42:58.682119 | orchestrator | 2026-03-11 00:42:58.682132 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-11 00:42:58.682145 | orchestrator | Wednesday 11 March 2026 00:42:55 +0000 (0:00:00.129) 0:00:34.433 ******* 2026-03-11 00:42:58.682160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}})  2026-03-11 00:42:58.682175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12aec0f2-63b1-5667-a447-7095f264ece1'}})  2026-03-11 00:42:58.682188 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.682201 | orchestrator | 2026-03-11 00:42:58.682213 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-11 00:42:58.682226 | orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.121) 0:00:34.554 ******* 2026-03-11 00:42:58.682238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}})  2026-03-11 00:42:58.682272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12aec0f2-63b1-5667-a447-7095f264ece1'}})  2026-03-11 00:42:58.682285 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.682297 | orchestrator | 2026-03-11 00:42:58.682310 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-11 00:42:58.682322 | orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.125) 0:00:34.679 ******* 2026-03-11 
00:42:58.682356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}})  2026-03-11 00:42:58.682369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12aec0f2-63b1-5667-a447-7095f264ece1'}})  2026-03-11 00:42:58.682382 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.682394 | orchestrator | 2026-03-11 00:42:58.682406 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-11 00:42:58.682418 | orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.135) 0:00:34.815 ******* 2026-03-11 00:42:58.682431 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:58.682443 | orchestrator | 2026-03-11 00:42:58.682456 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-11 00:42:58.682468 | orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.111) 0:00:34.926 ******* 2026-03-11 00:42:58.682481 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:58.682494 | orchestrator | 2026-03-11 00:42:58.682505 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-11 00:42:58.682515 | orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.109) 0:00:35.036 ******* 2026-03-11 00:42:58.682526 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.682537 | orchestrator | 2026-03-11 00:42:58.682548 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-11 00:42:58.682559 | orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.102) 0:00:35.139 ******* 2026-03-11 00:42:58.682570 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.682581 | orchestrator | 2026-03-11 00:42:58.682592 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-11 00:42:58.682603 
| orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.106) 0:00:35.246 ******* 2026-03-11 00:42:58.682614 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.682625 | orchestrator | 2026-03-11 00:42:58.682635 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-11 00:42:58.682646 | orchestrator | Wednesday 11 March 2026 00:42:56 +0000 (0:00:00.160) 0:00:35.407 ******* 2026-03-11 00:42:58.682658 | orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:42:58.682668 | orchestrator |  "ceph_osd_devices": { 2026-03-11 00:42:58.682684 | orchestrator |  "sdb": { 2026-03-11 00:42:58.682733 | orchestrator |  "osd_lvm_uuid": "5d149e3f-abc8-57c5-b2f4-c991fc87e4f9" 2026-03-11 00:42:58.682762 | orchestrator |  }, 2026-03-11 00:42:58.682780 | orchestrator |  "sdc": { 2026-03-11 00:42:58.682799 | orchestrator |  "osd_lvm_uuid": "12aec0f2-63b1-5667-a447-7095f264ece1" 2026-03-11 00:42:58.682816 | orchestrator |  } 2026-03-11 00:42:58.682834 | orchestrator |  } 2026-03-11 00:42:58.682852 | orchestrator | } 2026-03-11 00:42:58.682870 | orchestrator | 2026-03-11 00:42:58.682889 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-11 00:42:58.682908 | orchestrator | Wednesday 11 March 2026 00:42:57 +0000 (0:00:00.142) 0:00:35.549 ******* 2026-03-11 00:42:58.682927 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.682945 | orchestrator | 2026-03-11 00:42:58.682963 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-11 00:42:58.682975 | orchestrator | Wednesday 11 March 2026 00:42:57 +0000 (0:00:00.254) 0:00:35.804 ******* 2026-03-11 00:42:58.682986 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.683009 | orchestrator | 2026-03-11 00:42:58.683056 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-11 00:42:58.683076 | 
orchestrator | Wednesday 11 March 2026 00:42:57 +0000 (0:00:00.104) 0:00:35.908 ******* 2026-03-11 00:42:58.683091 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:58.683113 | orchestrator | 2026-03-11 00:42:58.683138 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-11 00:42:58.683156 | orchestrator | Wednesday 11 March 2026 00:42:57 +0000 (0:00:00.116) 0:00:36.024 ******* 2026-03-11 00:42:58.683172 | orchestrator | changed: [testbed-node-5] => { 2026-03-11 00:42:58.683190 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-11 00:42:58.683209 | orchestrator |  "ceph_osd_devices": { 2026-03-11 00:42:58.683228 | orchestrator |  "sdb": { 2026-03-11 00:42:58.683248 | orchestrator |  "osd_lvm_uuid": "5d149e3f-abc8-57c5-b2f4-c991fc87e4f9" 2026-03-11 00:42:58.683267 | orchestrator |  }, 2026-03-11 00:42:58.683281 | orchestrator |  "sdc": { 2026-03-11 00:42:58.683293 | orchestrator |  "osd_lvm_uuid": "12aec0f2-63b1-5667-a447-7095f264ece1" 2026-03-11 00:42:58.683304 | orchestrator |  } 2026-03-11 00:42:58.683323 | orchestrator |  }, 2026-03-11 00:42:58.683341 | orchestrator |  "lvm_volumes": [ 2026-03-11 00:42:58.683359 | orchestrator |  { 2026-03-11 00:42:58.683375 | orchestrator |  "data": "osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9", 2026-03-11 00:42:58.683391 | orchestrator |  "data_vg": "ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9" 2026-03-11 00:42:58.683408 | orchestrator |  }, 2026-03-11 00:42:58.683426 | orchestrator |  { 2026-03-11 00:42:58.683445 | orchestrator |  "data": "osd-block-12aec0f2-63b1-5667-a447-7095f264ece1", 2026-03-11 00:42:58.683464 | orchestrator |  "data_vg": "ceph-12aec0f2-63b1-5667-a447-7095f264ece1" 2026-03-11 00:42:58.683484 | orchestrator |  } 2026-03-11 00:42:58.683502 | orchestrator |  ] 2026-03-11 00:42:58.683527 | orchestrator |  } 2026-03-11 00:42:58.683539 | orchestrator | } 2026-03-11 00:42:58.683550 | orchestrator | 2026-03-11 00:42:58.683561 | 
orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-11 00:42:58.683572 | orchestrator | Wednesday 11 March 2026 00:42:57 +0000 (0:00:00.177) 0:00:36.201 ******* 2026-03-11 00:42:58.683583 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-11 00:42:58.683594 | orchestrator | 2026-03-11 00:42:58.683605 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:42:58.683616 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-11 00:42:58.683630 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-11 00:42:58.683641 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-11 00:42:58.683652 | orchestrator | 2026-03-11 00:42:58.683663 | orchestrator | 2026-03-11 00:42:58.683673 | orchestrator | 2026-03-11 00:42:58.683684 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:42:58.683695 | orchestrator | Wednesday 11 March 2026 00:42:58 +0000 (0:00:00.916) 0:00:37.118 ******* 2026-03-11 00:42:58.683705 | orchestrator | =============================================================================== 2026-03-11 00:42:58.683716 | orchestrator | Write configuration file ------------------------------------------------ 3.59s 2026-03-11 00:42:58.683736 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s 2026-03-11 00:42:58.683754 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-03-11 00:42:58.683772 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-03-11 00:42:58.683806 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 
0.92s 2026-03-11 00:42:58.683826 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-03-11 00:42:58.683846 | orchestrator | Print configuration data ------------------------------------------------ 0.74s 2026-03-11 00:42:58.683864 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-03-11 00:42:58.683880 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-11 00:42:58.683891 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-11 00:42:58.683902 | orchestrator | Get initial list of available block devices ----------------------------- 0.61s 2026-03-11 00:42:58.683913 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-03-11 00:42:58.683924 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.60s 2026-03-11 00:42:58.683947 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.59s 2026-03-11 00:42:58.916295 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-03-11 00:42:58.916431 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2026-03-11 00:42:58.916444 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s 2026-03-11 00:42:58.916454 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-03-11 00:42:58.916463 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s 2026-03-11 00:42:58.916473 | orchestrator | Print WAL devices ------------------------------------------------------- 0.48s 2026-03-11 00:43:21.035792 | orchestrator | 2026-03-11 00:43:21 | INFO  | Task 9164bf52-b1a4-46a6-b12f-80f944efe13a (sync inventory) 
is running in background. Output coming soon. 2026-03-11 00:43:48.337871 | orchestrator | 2026-03-11 00:43:22 | INFO  | Starting group_vars file reorganization 2026-03-11 00:43:48.338001 | orchestrator | 2026-03-11 00:43:22 | INFO  | Moved 0 file(s) to their respective directories 2026-03-11 00:43:48.338046 | orchestrator | 2026-03-11 00:43:22 | INFO  | Group_vars file reorganization completed 2026-03-11 00:43:48.338054 | orchestrator | 2026-03-11 00:43:25 | INFO  | Starting variable preparation from inventory 2026-03-11 00:43:48.338061 | orchestrator | 2026-03-11 00:43:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-11 00:43:48.338068 | orchestrator | 2026-03-11 00:43:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-11 00:43:48.338091 | orchestrator | 2026-03-11 00:43:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-11 00:43:48.338098 | orchestrator | 2026-03-11 00:43:28 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-11 00:43:48.338104 | orchestrator | 2026-03-11 00:43:28 | INFO  | Variable preparation completed 2026-03-11 00:43:48.338110 | orchestrator | 2026-03-11 00:43:29 | INFO  | Starting inventory overwrite handling 2026-03-11 00:43:48.338115 | orchestrator | 2026-03-11 00:43:29 | INFO  | Handling group overwrites in 99-overwrite 2026-03-11 00:43:48.338124 | orchestrator | 2026-03-11 00:43:29 | INFO  | Removing group frr:children from 60-generic 2026-03-11 00:43:48.338130 | orchestrator | 2026-03-11 00:43:29 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-11 00:43:48.338136 | orchestrator | 2026-03-11 00:43:29 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-11 00:43:48.338142 | orchestrator | 2026-03-11 00:43:29 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-11 00:43:48.338147 | orchestrator | 2026-03-11 00:43:29 | INFO  | Handling group overwrites in 20-roles 2026-03-11 00:43:48.338153 | orchestrator | 
2026-03-11 00:43:29 | INFO  | Removing group k3s_node from 50-infrastructure 2026-03-11 00:43:48.338175 | orchestrator | 2026-03-11 00:43:29 | INFO  | Removed 5 group(s) in total 2026-03-11 00:43:48.338185 | orchestrator | 2026-03-11 00:43:29 | INFO  | Inventory overwrite handling completed 2026-03-11 00:43:48.338194 | orchestrator | 2026-03-11 00:43:31 | INFO  | Starting merge of inventory files 2026-03-11 00:43:48.338203 | orchestrator | 2026-03-11 00:43:31 | INFO  | Inventory files merged successfully 2026-03-11 00:43:48.338213 | orchestrator | 2026-03-11 00:43:35 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-11 00:43:48.338222 | orchestrator | 2026-03-11 00:43:47 | INFO  | Successfully wrote ClusterShell configuration 2026-03-11 00:43:48.338231 | orchestrator | [master 29d1715] 2026-03-11-00-43 2026-03-11 00:43:48.338241 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-03-11 00:43:50.561363 | orchestrator | 2026-03-11 00:43:50 | INFO  | Task f13adac8-bfb0-4295-adf2-52ed93b3335c (ceph-create-lvm-devices) was prepared for execution. 2026-03-11 00:43:50.561443 | orchestrator | 2026-03-11 00:43:50 | INFO  | It takes a moment until task f13adac8-bfb0-4295-adf2-52ed93b3335c (ceph-create-lvm-devices) has been started and output is visible here. 
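The "Print configuration data" output above shows the core transformation of the "Ceph configure LVM" play: each entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, and the play derives one `lvm_volumes` entry per device, with the data LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal Python sketch of that mapping, inferred from the printed data only (the function name is hypothetical; the actual playbook implements this in Jinja2/Ansible, not Python):

```python
# Sketch: derive ceph-volume-style lvm_volumes entries from a
# ceph_osd_devices map (block-only layout, no separate DB/WAL,
# matching the skipped "block + db/wal" tasks in the log above).
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    return [
        {
            # Data LV and its VG are both named after the per-OSD UUID.
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]

# Example input shaped like the testbed-node-5 debug output:
devices = {
    "sdb": {"osd_lvm_uuid": "5d149e3f-abc8-57c5-b2f4-c991fc87e4f9"},
    "sdc": {"osd_lvm_uuid": "12aec0f2-63b1-5667-a447-7095f264ece1"},
}
print(lvm_volumes_from_osd_devices(devices))
```

The resulting structure is what the "Write configuration file" handler then persists on testbed-manager for the subsequent `ceph-create-lvm-devices` task to consume.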
2026-03-11 00:44:01.955228 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-11 00:44:01.955321 | orchestrator | 2.16.14
2026-03-11 00:44:01.955334 | orchestrator |
2026-03-11 00:44:01.955345 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-11 00:44:01.955355 | orchestrator |
2026-03-11 00:44:01.955364 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-11 00:44:01.955374 | orchestrator | Wednesday 11 March 2026 00:43:54 +0000 (0:00:00.275) 0:00:00.275 *******
2026-03-11 00:44:01.955383 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-11 00:44:01.955393 | orchestrator |
2026-03-11 00:44:01.955402 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-11 00:44:01.955411 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.236) 0:00:00.511 *******
2026-03-11 00:44:01.955419 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:44:01.955428 | orchestrator |
2026-03-11 00:44:01.955437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955447 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.194) 0:00:00.706 *******
2026-03-11 00:44:01.955456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-11 00:44:01.955465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-11 00:44:01.955473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-11 00:44:01.955482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-11 00:44:01.955491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-11 00:44:01.955499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-11 00:44:01.955508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-11 00:44:01.955517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-11 00:44:01.955525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-11 00:44:01.955534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-11 00:44:01.955543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-11 00:44:01.955551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-11 00:44:01.955560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-11 00:44:01.955589 | orchestrator |
2026-03-11 00:44:01.955599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955607 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.441) 0:00:01.147 *******
2026-03-11 00:44:01.955616 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955625 | orchestrator |
2026-03-11 00:44:01.955633 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955642 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.219) 0:00:01.367 *******
2026-03-11 00:44:01.955650 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955659 | orchestrator |
2026-03-11 00:44:01.955668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955676 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.201) 0:00:01.568 *******
2026-03-11 00:44:01.955685 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955693 | orchestrator |
2026-03-11 00:44:01.955702 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955711 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.153) 0:00:01.722 *******
2026-03-11 00:44:01.955719 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955728 | orchestrator |
2026-03-11 00:44:01.955736 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955745 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.184) 0:00:01.906 *******
2026-03-11 00:44:01.955753 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955762 | orchestrator |
2026-03-11 00:44:01.955770 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955781 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.174) 0:00:02.080 *******
2026-03-11 00:44:01.955790 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955800 | orchestrator |
2026-03-11 00:44:01.955810 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955820 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.181) 0:00:02.262 *******
2026-03-11 00:44:01.955830 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955840 | orchestrator |
2026-03-11 00:44:01.955849 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955859 | orchestrator | Wednesday 11 March 2026 00:43:57 +0000 (0:00:00.187) 0:00:02.450 *******
2026-03-11 00:44:01.955870 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.955880 | orchestrator |
2026-03-11 00:44:01.955890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955899 | orchestrator | Wednesday 11 March 2026 00:43:57 +0000 (0:00:00.168) 0:00:02.618 *******
2026-03-11 00:44:01.955910 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1)
2026-03-11 00:44:01.955921 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1)
2026-03-11 00:44:01.955930 | orchestrator |
2026-03-11 00:44:01.955962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.955988 | orchestrator | Wednesday 11 March 2026 00:43:57 +0000 (0:00:00.374) 0:00:02.992 *******
2026-03-11 00:44:01.955999 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7)
2026-03-11 00:44:01.956010 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7)
2026-03-11 00:44:01.956020 | orchestrator |
2026-03-11 00:44:01.956031 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.956041 | orchestrator | Wednesday 11 March 2026 00:43:58 +0000 (0:00:00.645) 0:00:03.637 *******
2026-03-11 00:44:01.956051 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4)
2026-03-11 00:44:01.956061 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4)
2026-03-11 00:44:01.956077 | orchestrator |
2026-03-11 00:44:01.956088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.956098 | orchestrator | Wednesday 11 March 2026 00:43:58 +0000 (0:00:00.660) 0:00:04.297 *******
2026-03-11 00:44:01.956108 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062)
2026-03-11 00:44:01.956118 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062)
2026-03-11 00:44:01.956128 | orchestrator |
2026-03-11 00:44:01.956138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:01.956148 | orchestrator | Wednesday 11 March 2026 00:43:59 +0000 (0:00:00.876) 0:00:05.174 *******
2026-03-11 00:44:01.956159 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-11 00:44:01.956168 | orchestrator |
2026-03-11 00:44:01.956177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956186 | orchestrator | Wednesday 11 March 2026 00:44:00 +0000 (0:00:00.331) 0:00:05.505 *******
2026-03-11 00:44:01.956194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-11 00:44:01.956203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-11 00:44:01.956212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-11 00:44:01.956237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-11 00:44:01.956246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-11 00:44:01.956255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-11 00:44:01.956264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-11 00:44:01.956273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-11 00:44:01.956282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-11 00:44:01.956290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-11 00:44:01.956299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-11 00:44:01.956312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-11 00:44:01.956321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-11 00:44:01.956330 | orchestrator |
2026-03-11 00:44:01.956339 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956348 | orchestrator | Wednesday 11 March 2026 00:44:00 +0000 (0:00:00.442) 0:00:05.947 *******
2026-03-11 00:44:01.956357 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.956366 | orchestrator |
2026-03-11 00:44:01.956375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956383 | orchestrator | Wednesday 11 March 2026 00:44:00 +0000 (0:00:00.197) 0:00:06.144 *******
2026-03-11 00:44:01.956392 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.956401 | orchestrator |
2026-03-11 00:44:01.956410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956419 | orchestrator | Wednesday 11 March 2026 00:44:00 +0000 (0:00:00.201) 0:00:06.346 *******
2026-03-11 00:44:01.956428 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.956436 | orchestrator |
2026-03-11 00:44:01.956445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956454 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.206) 0:00:06.553 *******
2026-03-11 00:44:01.956463 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.956477 | orchestrator |
2026-03-11 00:44:01.956486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956494 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.197) 0:00:06.750 *******
2026-03-11 00:44:01.956503 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.956512 | orchestrator |
2026-03-11 00:44:01.956521 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956530 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.208) 0:00:06.959 *******
2026-03-11 00:44:01.956538 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.956547 | orchestrator |
2026-03-11 00:44:01.956556 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:01.956565 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.177) 0:00:07.137 *******
2026-03-11 00:44:01.956574 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:01.956582 | orchestrator |
2026-03-11 00:44:01.956596 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:09.254473 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.186) 0:00:07.323 *******
2026-03-11 00:44:09.254539 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254546 | orchestrator |
2026-03-11 00:44:09.254551 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:09.254556 | orchestrator | Wednesday 11 March 2026 00:44:02 +0000 (0:00:00.173) 0:00:07.497 *******
2026-03-11 00:44:09.254560 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-11 00:44:09.254565 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-11 00:44:09.254570 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-11 00:44:09.254574 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-11 00:44:09.254578 | orchestrator |
2026-03-11 00:44:09.254582 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:09.254586 | orchestrator | Wednesday 11 March 2026 00:44:02 +0000 (0:00:00.859) 0:00:08.356 *******
2026-03-11 00:44:09.254590 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254593 | orchestrator |
2026-03-11 00:44:09.254597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:09.254601 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.181) 0:00:08.538 *******
2026-03-11 00:44:09.254605 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254609 | orchestrator |
2026-03-11 00:44:09.254612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:09.254616 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.188) 0:00:08.727 *******
2026-03-11 00:44:09.254620 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254624 | orchestrator |
2026-03-11 00:44:09.254628 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:09.254631 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.178) 0:00:08.905 *******
2026-03-11 00:44:09.254635 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254639 | orchestrator |
2026-03-11 00:44:09.254643 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-11 00:44:09.254646 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.197) 0:00:09.103 *******
2026-03-11 00:44:09.254650 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254654 | orchestrator |
2026-03-11 00:44:09.254658 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-11 00:44:09.254661 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.129) 0:00:09.233 *******
2026-03-11 00:44:09.254666 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f24027a-cb62-5112-a2b4-0ff1a158a780'}})
2026-03-11 00:44:09.254670 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '930a51f3-082d-5f24-af57-1314a0ff4b68'}})
2026-03-11 00:44:09.254674 | orchestrator |
2026-03-11 00:44:09.254677 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-11 00:44:09.254697 | orchestrator | Wednesday 11 March 2026 00:44:04 +0000 (0:00:00.187) 0:00:09.420 *******
2026-03-11 00:44:09.254701 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254706 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254710 | orchestrator |
2026-03-11 00:44:09.254714 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-11 00:44:09.254717 | orchestrator | Wednesday 11 March 2026 00:44:05 +0000 (0:00:01.884) 0:00:11.305 *******
2026-03-11 00:44:09.254721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254726 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254730 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254733 | orchestrator |
2026-03-11 00:44:09.254737 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-11 00:44:09.254741 | orchestrator | Wednesday 11 March 2026 00:44:06 +0000 (0:00:00.147) 0:00:11.453 *******
2026-03-11 00:44:09.254745 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254749 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254752 | orchestrator |
2026-03-11 00:44:09.254756 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-11 00:44:09.254760 | orchestrator | Wednesday 11 March 2026 00:44:07 +0000 (0:00:01.426) 0:00:12.880 *******
2026-03-11 00:44:09.254764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254772 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254775 | orchestrator |
2026-03-11 00:44:09.254779 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-11 00:44:09.254783 | orchestrator | Wednesday 11 March 2026 00:44:07 +0000 (0:00:00.120) 0:00:13.004 *******
2026-03-11 00:44:09.254796 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254800 | orchestrator |
2026-03-11 00:44:09.254804 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-11 00:44:09.254807 | orchestrator | Wednesday 11 March 2026 00:44:07 +0000 (0:00:00.120) 0:00:13.125 *******
2026-03-11 00:44:09.254811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254819 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254823 | orchestrator |
2026-03-11 00:44:09.254826 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-11 00:44:09.254830 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.265) 0:00:13.390 *******
2026-03-11 00:44:09.254834 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254837 | orchestrator |
2026-03-11 00:44:09.254841 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-11 00:44:09.254845 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.127) 0:00:13.517 *******
2026-03-11 00:44:09.254852 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254856 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254860 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254864 | orchestrator |
2026-03-11 00:44:09.254867 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-11 00:44:09.254871 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.148) 0:00:13.665 *******
2026-03-11 00:44:09.254875 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254879 | orchestrator |
2026-03-11 00:44:09.254882 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-11 00:44:09.254886 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.121) 0:00:13.787 *******
2026-03-11 00:44:09.254890 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254898 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254901 | orchestrator |
2026-03-11 00:44:09.254905 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-11 00:44:09.254909 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.121) 0:00:13.929 *******
2026-03-11 00:44:09.254912 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:44:09.254916 | orchestrator |
2026-03-11 00:44:09.254920 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-11 00:44:09.254965 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.121) 0:00:14.050 *******
2026-03-11 00:44:09.254972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254976 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.254980 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.254984 | orchestrator |
2026-03-11 00:44:09.254988 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-11 00:44:09.254992 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.143) 0:00:14.194 *******
2026-03-11 00:44:09.254995 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.254999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.255003 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.255007 | orchestrator |
2026-03-11 00:44:09.255011 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-11 00:44:09.255014 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.153) 0:00:14.348 *******
2026-03-11 00:44:09.255018 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:09.255022 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:09.255026 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.255030 | orchestrator |
2026-03-11 00:44:09.255034 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-11 00:44:09.255037 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.160) 0:00:14.509 *******
2026-03-11 00:44:09.255045 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:09.255049 | orchestrator |
2026-03-11 00:44:09.255053 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-11 00:44:09.255060 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.115) 0:00:14.624 *******
2026-03-11 00:44:15.189838 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.189908 | orchestrator |
2026-03-11 00:44:15.189915 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-11 00:44:15.189972 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.106) 0:00:14.731 *******
2026-03-11 00:44:15.189977 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.189981 | orchestrator |
2026-03-11 00:44:15.189985 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-11 00:44:15.189989 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.114) 0:00:14.846 *******
2026-03-11 00:44:15.189994 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:44:15.189999 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-11 00:44:15.190003 | orchestrator | }
2026-03-11 00:44:15.190007 | orchestrator |
2026-03-11 00:44:15.190045 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-11 00:44:15.190050 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.277) 0:00:15.123 *******
2026-03-11 00:44:15.190054 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:44:15.190058 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-11 00:44:15.190062 | orchestrator | }
2026-03-11 00:44:15.190066 | orchestrator |
2026-03-11 00:44:15.190070 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-11 00:44:15.190074 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.123) 0:00:15.251 *******
2026-03-11 00:44:15.190078 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:44:15.190083 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-11 00:44:15.190087 | orchestrator | }
2026-03-11 00:44:15.190091 | orchestrator |
2026-03-11 00:44:15.190095 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-11 00:44:15.190099 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.123) 0:00:15.374 *******
2026-03-11 00:44:15.190102 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:44:15.190106 | orchestrator |
2026-03-11 00:44:15.190111 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-11 00:44:15.190117 | orchestrator | Wednesday 11 March 2026 00:44:10 +0000 (0:00:00.674) 0:00:16.049 *******
2026-03-11 00:44:15.190124 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:44:15.190128 | orchestrator |
2026-03-11 00:44:15.190132 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-11 00:44:15.190135 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.488) 0:00:16.538 *******
2026-03-11 00:44:15.190139 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:44:15.190143 | orchestrator |
2026-03-11 00:44:15.190147 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-11 00:44:15.190151 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.539) 0:00:17.078 *******
2026-03-11 00:44:15.190155 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:44:15.190159 | orchestrator |
2026-03-11 00:44:15.190163 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-11 00:44:15.190167 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.121) 0:00:17.199 *******
2026-03-11 00:44:15.190170 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190174 | orchestrator |
2026-03-11 00:44:15.190178 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-11 00:44:15.190182 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.119) 0:00:17.318 *******
2026-03-11 00:44:15.190186 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190190 | orchestrator |
2026-03-11 00:44:15.190194 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-11 00:44:15.190213 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.094) 0:00:17.413 *******
2026-03-11 00:44:15.190228 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:44:15.190232 | orchestrator |  "vgs_report": {
2026-03-11 00:44:15.190236 | orchestrator |  "vg": []
2026-03-11 00:44:15.190240 | orchestrator |  }
2026-03-11 00:44:15.190244 | orchestrator | }
2026-03-11 00:44:15.190248 | orchestrator |
2026-03-11 00:44:15.190253 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-11 00:44:15.190256 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.132) 0:00:17.545 *******
2026-03-11 00:44:15.190260 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190264 | orchestrator |
2026-03-11 00:44:15.190268 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-11 00:44:15.190272 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.135) 0:00:17.680 *******
2026-03-11 00:44:15.190275 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190279 | orchestrator |
2026-03-11 00:44:15.190283 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-11 00:44:15.190287 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.148) 0:00:17.829 *******
2026-03-11 00:44:15.190291 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190294 | orchestrator |
2026-03-11 00:44:15.190298 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-11 00:44:15.190302 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.299) 0:00:18.129 *******
2026-03-11 00:44:15.190306 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190310 | orchestrator |
2026-03-11 00:44:15.190314 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-11 00:44:15.190318 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.120) 0:00:18.249 *******
2026-03-11 00:44:15.190322 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190325 | orchestrator |
2026-03-11 00:44:15.190329 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-11 00:44:15.190333 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.130) 0:00:18.380 *******
2026-03-11 00:44:15.190337 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190341 | orchestrator |
2026-03-11 00:44:15.190344 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-11 00:44:15.190348 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.185) 0:00:18.565 *******
2026-03-11 00:44:15.190352 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190356 | orchestrator |
2026-03-11 00:44:15.190360 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-11 00:44:15.190363 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.136) 0:00:18.701 *******
2026-03-11 00:44:15.190377 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190381 | orchestrator |
2026-03-11 00:44:15.190385 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-11 00:44:15.190389 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.126) 0:00:18.827 *******
2026-03-11 00:44:15.190393 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190396 | orchestrator |
2026-03-11 00:44:15.190400 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-11 00:44:15.190404 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.109) 0:00:18.937 *******
2026-03-11 00:44:15.190408 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190412 | orchestrator |
2026-03-11 00:44:15.190416 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-11 00:44:15.190421 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.126) 0:00:19.063 *******
2026-03-11 00:44:15.190425 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190429 | orchestrator |
2026-03-11 00:44:15.190433 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-11 00:44:15.190437 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.112) 0:00:19.176 *******
2026-03-11 00:44:15.190446 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190451 | orchestrator |
2026-03-11 00:44:15.190458 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-11 00:44:15.190464 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.118) 0:00:19.295 *******
2026-03-11 00:44:15.190468 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190472 | orchestrator |
2026-03-11 00:44:15.190476 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-11 00:44:15.190480 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.151) 0:00:19.446 *******
2026-03-11 00:44:15.190485 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190489 | orchestrator |
2026-03-11 00:44:15.190493 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-11 00:44:15.190497 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.103) 0:00:19.550 *******
2026-03-11 00:44:15.190502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:15.190508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:15.190513 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190517 | orchestrator |
2026-03-11 00:44:15.190521 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-11 00:44:15.190525 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.252) 0:00:19.802 *******
2026-03-11 00:44:15.190530 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:15.190534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:15.190538 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190542 | orchestrator |
2026-03-11 00:44:15.190547 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-11 00:44:15.190551 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.135) 0:00:19.937 *******
2026-03-11 00:44:15.190556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:44:15.190560 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:44:15.190565 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:44:15.190569 | orchestrator |
2026-03-11 00:44:15.190574 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-11 00:44:15.190578 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.139) 0:00:20.077 *******
2026-03-11 00:44:15.190583 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:15.190587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:15.190591 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:15.190596 | orchestrator | 2026-03-11 00:44:15.190600 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-11 00:44:15.190605 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.141) 0:00:20.218 ******* 2026-03-11 00:44:15.190609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:15.190613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:15.190621 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:15.190625 | orchestrator | 2026-03-11 00:44:15.190630 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-11 00:44:15.190638 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 (0:00:00.195) 0:00:20.414 ******* 2026-03-11 00:44:15.190645 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:19.915747 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:19.915867 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:19.915885 | orchestrator | 2026-03-11 00:44:19.915898 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-11 00:44:19.915995 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 (0:00:00.142) 0:00:20.557 ******* 2026-03-11 00:44:19.916009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:19.916021 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:19.916033 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:19.916044 | orchestrator | 2026-03-11 00:44:19.916055 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-11 00:44:19.916067 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 (0:00:00.139) 0:00:20.697 ******* 2026-03-11 00:44:19.916078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:19.916090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:19.916101 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:19.916112 | orchestrator | 2026-03-11 00:44:19.916123 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-11 00:44:19.916134 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 (0:00:00.131) 0:00:20.828 ******* 2026-03-11 00:44:19.916145 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:44:19.916157 | orchestrator | 2026-03-11 00:44:19.916168 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-11 00:44:19.916179 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 
(0:00:00.513) 0:00:21.342 ******* 2026-03-11 00:44:19.916189 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:44:19.916200 | orchestrator | 2026-03-11 00:44:19.916211 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-11 00:44:19.916222 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.506) 0:00:21.848 ******* 2026-03-11 00:44:19.916232 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:44:19.916243 | orchestrator | 2026-03-11 00:44:19.916257 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-11 00:44:19.916269 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.142) 0:00:21.991 ******* 2026-03-11 00:44:19.916282 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'vg_name': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'}) 2026-03-11 00:44:19.916313 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'vg_name': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'}) 2026-03-11 00:44:19.916325 | orchestrator | 2026-03-11 00:44:19.916338 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-11 00:44:19.916350 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.146) 0:00:22.137 ******* 2026-03-11 00:44:19.916363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:19.916399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:19.916412 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:19.916424 | orchestrator | 2026-03-11 00:44:19.916438 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-11 00:44:19.916451 | orchestrator | Wednesday 11 March 2026 00:44:17 +0000 (0:00:00.291) 0:00:22.428 ******* 2026-03-11 00:44:19.916463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:19.916476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:19.916489 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:19.916502 | orchestrator | 2026-03-11 00:44:19.916515 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-11 00:44:19.916527 | orchestrator | Wednesday 11 March 2026 00:44:17 +0000 (0:00:00.192) 0:00:22.621 ******* 2026-03-11 00:44:19.916540 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})  2026-03-11 00:44:19.916553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})  2026-03-11 00:44:19.916566 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:19.916578 | orchestrator | 2026-03-11 00:44:19.916591 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-11 00:44:19.916605 | orchestrator | Wednesday 11 March 2026 00:44:17 +0000 (0:00:00.145) 0:00:22.766 ******* 2026-03-11 00:44:19.916636 | orchestrator | ok: [testbed-node-3] => { 2026-03-11 00:44:19.916648 | orchestrator |  "lvm_report": { 2026-03-11 00:44:19.916660 | orchestrator |  "lv": [ 2026-03-11 00:44:19.916672 | orchestrator |  { 2026-03-11 00:44:19.916683 | orchestrator |  "lv_name": 
"osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780", 2026-03-11 00:44:19.916694 | orchestrator |  "vg_name": "ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780" 2026-03-11 00:44:19.916705 | orchestrator |  }, 2026-03-11 00:44:19.916716 | orchestrator |  { 2026-03-11 00:44:19.916727 | orchestrator |  "lv_name": "osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68", 2026-03-11 00:44:19.916738 | orchestrator |  "vg_name": "ceph-930a51f3-082d-5f24-af57-1314a0ff4b68" 2026-03-11 00:44:19.916749 | orchestrator |  } 2026-03-11 00:44:19.916759 | orchestrator |  ], 2026-03-11 00:44:19.916770 | orchestrator |  "pv": [ 2026-03-11 00:44:19.916781 | orchestrator |  { 2026-03-11 00:44:19.916792 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-11 00:44:19.916803 | orchestrator |  "vg_name": "ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780" 2026-03-11 00:44:19.916822 | orchestrator |  }, 2026-03-11 00:44:19.916841 | orchestrator |  { 2026-03-11 00:44:19.916860 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-11 00:44:19.916879 | orchestrator |  "vg_name": "ceph-930a51f3-082d-5f24-af57-1314a0ff4b68" 2026-03-11 00:44:19.916897 | orchestrator |  } 2026-03-11 00:44:19.916947 | orchestrator |  ] 2026-03-11 00:44:19.916967 | orchestrator |  } 2026-03-11 00:44:19.916986 | orchestrator | } 2026-03-11 00:44:19.917005 | orchestrator | 2026-03-11 00:44:19.917025 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-11 00:44:19.917045 | orchestrator | 2026-03-11 00:44:19.917063 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-11 00:44:19.917082 | orchestrator | Wednesday 11 March 2026 00:44:17 +0000 (0:00:00.260) 0:00:23.027 ******* 2026-03-11 00:44:19.917115 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-11 00:44:19.917136 | orchestrator | 2026-03-11 00:44:19.917155 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-11 
00:44:19.917175 | orchestrator | Wednesday 11 March 2026 00:44:17 +0000 (0:00:00.239) 0:00:23.267 ******* 2026-03-11 00:44:19.917194 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:44:19.917213 | orchestrator | 2026-03-11 00:44:19.917225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:19.917236 | orchestrator | Wednesday 11 March 2026 00:44:18 +0000 (0:00:00.222) 0:00:23.489 ******* 2026-03-11 00:44:19.917247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-11 00:44:19.917258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-11 00:44:19.917269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-11 00:44:19.917279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-11 00:44:19.917290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-11 00:44:19.917301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-11 00:44:19.917311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-11 00:44:19.917330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-11 00:44:19.917346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-11 00:44:19.917365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-11 00:44:19.917383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-11 00:44:19.917401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-11 00:44:19.917419 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-11 00:44:19.917437 | orchestrator | 2026-03-11 00:44:19.917455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:19.917474 | orchestrator | Wednesday 11 March 2026 00:44:18 +0000 (0:00:00.394) 0:00:23.884 ******* 2026-03-11 00:44:19.917494 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:19.917509 | orchestrator | 2026-03-11 00:44:19.917521 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:19.917531 | orchestrator | Wednesday 11 March 2026 00:44:18 +0000 (0:00:00.188) 0:00:24.072 ******* 2026-03-11 00:44:19.917542 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:19.917553 | orchestrator | 2026-03-11 00:44:19.917564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:19.917574 | orchestrator | Wednesday 11 March 2026 00:44:18 +0000 (0:00:00.188) 0:00:24.261 ******* 2026-03-11 00:44:19.917585 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:19.917596 | orchestrator | 2026-03-11 00:44:19.917607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:19.917617 | orchestrator | Wednesday 11 March 2026 00:44:19 +0000 (0:00:00.474) 0:00:24.735 ******* 2026-03-11 00:44:19.917628 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:19.917639 | orchestrator | 2026-03-11 00:44:19.917649 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:19.917660 | orchestrator | Wednesday 11 March 2026 00:44:19 +0000 (0:00:00.181) 0:00:24.917 ******* 2026-03-11 00:44:19.917671 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:19.917681 | orchestrator | 2026-03-11 00:44:19.917692 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-11 00:44:19.917703 | orchestrator | Wednesday 11 March 2026 00:44:19 +0000 (0:00:00.191) 0:00:25.109 ******* 2026-03-11 00:44:19.917723 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:19.917733 | orchestrator | 2026-03-11 00:44:19.917755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:30.101524 | orchestrator | Wednesday 11 March 2026 00:44:19 +0000 (0:00:00.174) 0:00:25.283 ******* 2026-03-11 00:44:30.101628 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.101644 | orchestrator | 2026-03-11 00:44:30.101658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:30.101669 | orchestrator | Wednesday 11 March 2026 00:44:20 +0000 (0:00:00.173) 0:00:25.457 ******* 2026-03-11 00:44:30.101680 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.101691 | orchestrator | 2026-03-11 00:44:30.101702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:30.101713 | orchestrator | Wednesday 11 March 2026 00:44:20 +0000 (0:00:00.190) 0:00:25.648 ******* 2026-03-11 00:44:30.101724 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5) 2026-03-11 00:44:30.101736 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5) 2026-03-11 00:44:30.101748 | orchestrator | 2026-03-11 00:44:30.101758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:30.101769 | orchestrator | Wednesday 11 March 2026 00:44:20 +0000 (0:00:00.387) 0:00:26.035 ******* 2026-03-11 00:44:30.101780 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b) 2026-03-11 00:44:30.101791 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b) 2026-03-11 00:44:30.101802 | orchestrator | 2026-03-11 00:44:30.101812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:30.101823 | orchestrator | Wednesday 11 March 2026 00:44:21 +0000 (0:00:00.380) 0:00:26.416 ******* 2026-03-11 00:44:30.101834 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d) 2026-03-11 00:44:30.101845 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d) 2026-03-11 00:44:30.101856 | orchestrator | 2026-03-11 00:44:30.101866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:30.101877 | orchestrator | Wednesday 11 March 2026 00:44:21 +0000 (0:00:00.365) 0:00:26.781 ******* 2026-03-11 00:44:30.101888 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4) 2026-03-11 00:44:30.101899 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4) 2026-03-11 00:44:30.101939 | orchestrator | 2026-03-11 00:44:30.101950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:30.101962 | orchestrator | Wednesday 11 March 2026 00:44:21 +0000 (0:00:00.581) 0:00:27.362 ******* 2026-03-11 00:44:30.101973 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-11 00:44:30.101984 | orchestrator | 2026-03-11 00:44:30.101995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102006 | orchestrator | Wednesday 11 March 2026 00:44:22 +0000 (0:00:00.461) 0:00:27.824 ******* 2026-03-11 00:44:30.102052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-11 00:44:30.102066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-11 00:44:30.102078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-11 00:44:30.102091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-11 00:44:30.102103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-11 00:44:30.102135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-11 00:44:30.102171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-11 00:44:30.102183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-11 00:44:30.102197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-11 00:44:30.102209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-11 00:44:30.102221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-11 00:44:30.102233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-11 00:44:30.102246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-11 00:44:30.102258 | orchestrator | 2026-03-11 00:44:30.102271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102283 | orchestrator | Wednesday 11 March 2026 00:44:23 +0000 (0:00:00.687) 0:00:28.512 ******* 2026-03-11 00:44:30.102295 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102307 | orchestrator | 2026-03-11 
00:44:30.102319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102331 | orchestrator | Wednesday 11 March 2026 00:44:23 +0000 (0:00:00.164) 0:00:28.676 ******* 2026-03-11 00:44:30.102344 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102355 | orchestrator | 2026-03-11 00:44:30.102368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102380 | orchestrator | Wednesday 11 March 2026 00:44:23 +0000 (0:00:00.179) 0:00:28.855 ******* 2026-03-11 00:44:30.102393 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102404 | orchestrator | 2026-03-11 00:44:30.102434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102445 | orchestrator | Wednesday 11 March 2026 00:44:23 +0000 (0:00:00.163) 0:00:29.018 ******* 2026-03-11 00:44:30.102456 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102467 | orchestrator | 2026-03-11 00:44:30.102478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102489 | orchestrator | Wednesday 11 March 2026 00:44:23 +0000 (0:00:00.184) 0:00:29.202 ******* 2026-03-11 00:44:30.102500 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102510 | orchestrator | 2026-03-11 00:44:30.102521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102532 | orchestrator | Wednesday 11 March 2026 00:44:24 +0000 (0:00:00.196) 0:00:29.399 ******* 2026-03-11 00:44:30.102543 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102553 | orchestrator | 2026-03-11 00:44:30.102564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102575 | orchestrator | Wednesday 11 March 2026 00:44:24 +0000 (0:00:00.173) 
0:00:29.573 ******* 2026-03-11 00:44:30.102586 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102597 | orchestrator | 2026-03-11 00:44:30.102607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102618 | orchestrator | Wednesday 11 March 2026 00:44:24 +0000 (0:00:00.173) 0:00:29.746 ******* 2026-03-11 00:44:30.102629 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102640 | orchestrator | 2026-03-11 00:44:30.102651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102662 | orchestrator | Wednesday 11 March 2026 00:44:24 +0000 (0:00:00.203) 0:00:29.950 ******* 2026-03-11 00:44:30.102672 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-11 00:44:30.102683 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-11 00:44:30.102695 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-11 00:44:30.102706 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-11 00:44:30.102717 | orchestrator | 2026-03-11 00:44:30.102728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102747 | orchestrator | Wednesday 11 March 2026 00:44:25 +0000 (0:00:00.789) 0:00:30.740 ******* 2026-03-11 00:44:30.102758 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102768 | orchestrator | 2026-03-11 00:44:30.102779 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102790 | orchestrator | Wednesday 11 March 2026 00:44:25 +0000 (0:00:00.200) 0:00:30.940 ******* 2026-03-11 00:44:30.102801 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102811 | orchestrator | 2026-03-11 00:44:30.102822 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102833 | orchestrator | Wednesday 11 
March 2026 00:44:26 +0000 (0:00:00.489) 0:00:31.430 ******* 2026-03-11 00:44:30.102844 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102854 | orchestrator | 2026-03-11 00:44:30.102865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:30.102876 | orchestrator | Wednesday 11 March 2026 00:44:26 +0000 (0:00:00.205) 0:00:31.636 ******* 2026-03-11 00:44:30.102887 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.102898 | orchestrator | 2026-03-11 00:44:30.102962 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-11 00:44:30.102989 | orchestrator | Wednesday 11 March 2026 00:44:26 +0000 (0:00:00.235) 0:00:31.871 ******* 2026-03-11 00:44:30.103008 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:30.103027 | orchestrator | 2026-03-11 00:44:30.103038 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-11 00:44:30.103049 | orchestrator | Wednesday 11 March 2026 00:44:26 +0000 (0:00:00.130) 0:00:32.001 ******* 2026-03-11 00:44:30.103060 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9a64462a-5614-5a25-979d-2f017565a0c4'}}) 2026-03-11 00:44:30.103072 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6773b3-a2d9-5476-8e14-434a68284534'}}) 2026-03-11 00:44:30.103083 | orchestrator | 2026-03-11 00:44:30.103094 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-11 00:44:30.103104 | orchestrator | Wednesday 11 March 2026 00:44:26 +0000 (0:00:00.182) 0:00:32.184 ******* 2026-03-11 00:44:30.103116 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 2026-03-11 00:44:30.103129 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'})
2026-03-11 00:44:30.103140 | orchestrator | 
2026-03-11 00:44:30.103151 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-11 00:44:30.103162 | orchestrator | Wednesday 11 March 2026 00:44:28 +0000 (0:00:01.819) 0:00:34.004 *******
2026-03-11 00:44:30.103172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:30.103184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:30.103196 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:30.103206 | orchestrator | 
2026-03-11 00:44:30.103217 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-11 00:44:30.103228 | orchestrator | Wednesday 11 March 2026 00:44:28 +0000 (0:00:00.144) 0:00:34.148 *******
2026-03-11 00:44:30.103239 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'})
2026-03-11 00:44:30.103258 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'})
2026-03-11 00:44:35.423702 | orchestrator | 
2026-03-11 00:44:35.423871 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-11 00:44:35.423990 | orchestrator | Wednesday 11 March 2026 00:44:30 +0000 (0:00:01.324) 0:00:35.472 *******
2026-03-11 00:44:35.424002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:35.424059 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:35.424080 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424089 | orchestrator | 
2026-03-11 00:44:35.424097 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-11 00:44:35.424114 | orchestrator | Wednesday 11 March 2026 00:44:30 +0000 (0:00:00.123) 0:00:35.596 *******
2026-03-11 00:44:35.424288 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424302 | orchestrator | 
2026-03-11 00:44:35.424311 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-11 00:44:35.424320 | orchestrator | Wednesday 11 March 2026 00:44:30 +0000 (0:00:00.107) 0:00:35.704 *******
2026-03-11 00:44:35.424329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:35.424341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:35.424353 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424366 | orchestrator | 
2026-03-11 00:44:35.424378 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-11 00:44:35.424391 | orchestrator | Wednesday 11 March 2026 00:44:30 +0000 (0:00:00.140) 0:00:35.844 *******
2026-03-11 00:44:35.424403 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424416 | orchestrator | 
2026-03-11 00:44:35.424428 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-11 00:44:35.424440 | orchestrator | Wednesday 11 March 2026 00:44:30 +0000 (0:00:00.140) 0:00:35.984 *******
2026-03-11 00:44:35.424449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:35.424458 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:35.424467 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424475 | orchestrator | 
2026-03-11 00:44:35.424483 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-11 00:44:35.424505 | orchestrator | Wednesday 11 March 2026 00:44:30 +0000 (0:00:00.283) 0:00:36.268 *******
2026-03-11 00:44:35.424514 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424579 | orchestrator | 
2026-03-11 00:44:35.424598 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-11 00:44:35.424606 | orchestrator | Wednesday 11 March 2026 00:44:31 +0000 (0:00:00.126) 0:00:36.395 *******
2026-03-11 00:44:35.424626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:35.424645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:35.424670 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424677 | orchestrator | 
2026-03-11 00:44:35.424698 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-11 00:44:35.424716 | orchestrator | Wednesday 11 March 2026 00:44:31 +0000 (0:00:00.155) 0:00:36.550 *******
2026-03-11 00:44:35.424723 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:35.424741 | orchestrator | 
2026-03-11 00:44:35.424749 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-11 00:44:35.424780 | orchestrator | Wednesday 11 March 2026 00:44:31 +0000 (0:00:00.127) 0:00:36.678 *******
2026-03-11 00:44:35.424788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:35.424812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:35.424823 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424833 | orchestrator | 
2026-03-11 00:44:35.424859 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-11 00:44:35.424866 | orchestrator | Wednesday 11 March 2026 00:44:31 +0000 (0:00:00.136) 0:00:36.815 *******
2026-03-11 00:44:35.424874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:35.424891 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:35.424917 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.424929 | orchestrator | 
2026-03-11 00:44:35.424958 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-11 00:44:35.425016 | orchestrator | Wednesday 11 March 2026 00:44:31 +0000 (0:00:00.139) 0:00:36.954 *******
2026-03-11 00:44:35.425027 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:35.425037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:35.425046 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425054 | orchestrator | 
2026-03-11 00:44:35.425063 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-11 00:44:35.425072 | orchestrator | Wednesday 11 March 2026 00:44:31 +0000 (0:00:00.147) 0:00:37.101 *******
2026-03-11 00:44:35.425081 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425089 | orchestrator | 
2026-03-11 00:44:35.425098 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-11 00:44:35.425107 | orchestrator | Wednesday 11 March 2026 00:44:31 +0000 (0:00:00.129) 0:00:37.230 *******
2026-03-11 00:44:35.425115 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425124 | orchestrator | 
2026-03-11 00:44:35.425133 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-11 00:44:35.425141 | orchestrator | Wednesday 11 March 2026 00:44:32 +0000 (0:00:00.149) 0:00:37.380 *******
2026-03-11 00:44:35.425150 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425159 | orchestrator | 
2026-03-11 00:44:35.425167 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-11 00:44:35.425176 | orchestrator | Wednesday 11 March 2026 00:44:32 +0000 (0:00:00.131) 0:00:37.511 *******
2026-03-11 00:44:35.425185 | orchestrator | ok: [testbed-node-4] => {
2026-03-11 00:44:35.425194 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-11 00:44:35.425203 | orchestrator | }
2026-03-11 00:44:35.425237 | orchestrator | 
2026-03-11 00:44:35.425258 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-11 00:44:35.425267 | orchestrator | Wednesday 11 March 2026 00:44:32 +0000 (0:00:00.135) 0:00:37.647 *******
2026-03-11 00:44:35.425327 | orchestrator | ok: [testbed-node-4] => {
2026-03-11 00:44:35.425348 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-11 00:44:35.425366 | orchestrator | }
2026-03-11 00:44:35.425375 | orchestrator | 
2026-03-11 00:44:35.425384 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-11 00:44:35.425392 | orchestrator | Wednesday 11 March 2026 00:44:32 +0000 (0:00:00.137) 0:00:37.784 *******
2026-03-11 00:44:35.425410 | orchestrator | ok: [testbed-node-4] => {
2026-03-11 00:44:35.425418 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-11 00:44:35.425427 | orchestrator | }
2026-03-11 00:44:35.425436 | orchestrator | 
2026-03-11 00:44:35.425445 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-11 00:44:35.425453 | orchestrator | Wednesday 11 March 2026 00:44:32 +0000 (0:00:00.368) 0:00:38.153 *******
2026-03-11 00:44:35.425462 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:35.425471 | orchestrator | 
2026-03-11 00:44:35.425479 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-11 00:44:35.425489 | orchestrator | Wednesday 11 March 2026 00:44:33 +0000 (0:00:00.497) 0:00:38.650 *******
2026-03-11 00:44:35.425497 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:35.425506 | orchestrator | 
2026-03-11 00:44:35.425515 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-11 00:44:35.425523 | orchestrator | Wednesday 11 March 2026 00:44:33 +0000 (0:00:00.553) 0:00:39.204 *******
2026-03-11 00:44:35.425532 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:35.425540 | orchestrator | 
2026-03-11 00:44:35.425549 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-11 00:44:35.425558 | orchestrator | Wednesday 11 March 2026 00:44:34 +0000 (0:00:00.156) 0:00:39.723 *******
2026-03-11 00:44:35.425566 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:35.425575 | orchestrator | 
2026-03-11 00:44:35.425583 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-11 00:44:35.425592 | orchestrator | Wednesday 11 March 2026 00:44:34 +0000 (0:00:00.103) 0:00:39.879 *******
2026-03-11 00:44:35.425601 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425609 | orchestrator | 
2026-03-11 00:44:35.425627 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-11 00:44:35.425636 | orchestrator | Wednesday 11 March 2026 00:44:34 +0000 (0:00:00.114) 0:00:39.982 *******
2026-03-11 00:44:35.425645 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425654 | orchestrator | 
2026-03-11 00:44:35.425662 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-11 00:44:35.425671 | orchestrator | Wednesday 11 March 2026 00:44:34 +0000 (0:00:00.144) 0:00:40.097 *******
2026-03-11 00:44:35.425679 | orchestrator | ok: [testbed-node-4] => {
2026-03-11 00:44:35.425688 | orchestrator |     "vgs_report": {
2026-03-11 00:44:35.425698 | orchestrator |         "vg": []
2026-03-11 00:44:35.425706 | orchestrator |     }
2026-03-11 00:44:35.425715 | orchestrator | }
2026-03-11 00:44:35.425724 | orchestrator | 
2026-03-11 00:44:35.425733 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-11 00:44:35.425742 | orchestrator | Wednesday 11 March 2026 00:44:34 +0000 (0:00:00.144) 0:00:40.241 *******
2026-03-11 00:44:35.425750 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425758 | orchestrator | 
2026-03-11 00:44:35.425767 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-11 00:44:35.425776 | orchestrator | Wednesday 11 March 2026 00:44:35 +0000 (0:00:00.135) 0:00:40.377 *******
2026-03-11 00:44:35.425785 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425793 | orchestrator | 
2026-03-11 00:44:35.425802 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-11 00:44:35.425810 | orchestrator | Wednesday 11 March 2026 00:44:35 +0000 (0:00:00.142) 0:00:40.520 *******
2026-03-11 00:44:35.425819 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425828 | orchestrator | 
2026-03-11 00:44:35.425836 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-11 00:44:35.425845 | orchestrator | Wednesday 11 March 2026 00:44:35 +0000 (0:00:00.143) 0:00:40.663 *******
2026-03-11 00:44:35.425854 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:35.425862 | orchestrator | 
2026-03-11 00:44:35.425878 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-11 00:44:39.662634 | orchestrator | Wednesday 11 March 2026 00:44:35 +0000 (0:00:00.128) 0:00:40.792 *******
2026-03-11 00:44:39.662768 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.662786 | orchestrator | 
2026-03-11 00:44:39.662799 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-11 00:44:39.662811 | orchestrator | Wednesday 11 March 2026 00:44:35 +0000 (0:00:00.363) 0:00:41.156 *******
2026-03-11 00:44:39.662822 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.662832 | orchestrator | 
2026-03-11 00:44:39.662843 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-11 00:44:39.662854 | orchestrator | Wednesday 11 March 2026 00:44:35 +0000 (0:00:00.124) 0:00:41.280 *******
2026-03-11 00:44:39.662865 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.662876 | orchestrator | 
2026-03-11 00:44:39.662887 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-11 00:44:39.663024 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.138) 0:00:41.419 *******
2026-03-11 00:44:39.663047 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663065 | orchestrator | 
2026-03-11 00:44:39.663083 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-11 00:44:39.663102 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.119) 0:00:41.539 *******
2026-03-11 00:44:39.663126 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663145 | orchestrator | 
2026-03-11 00:44:39.663164 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-11 00:44:39.663185 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.116) 0:00:41.655 *******
2026-03-11 00:44:39.663201 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663213 | orchestrator | 
2026-03-11 00:44:39.663226 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-11 00:44:39.663239 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.117) 0:00:41.773 *******
2026-03-11 00:44:39.663251 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663264 | orchestrator | 
2026-03-11 00:44:39.663277 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-11 00:44:39.663290 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.127) 0:00:41.900 *******
2026-03-11 00:44:39.663302 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663315 | orchestrator | 
2026-03-11 00:44:39.663328 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-11 00:44:39.663341 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.122) 0:00:42.022 *******
2026-03-11 00:44:39.663353 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663365 | orchestrator | 
2026-03-11 00:44:39.663377 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-11 00:44:39.663389 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.110) 0:00:42.133 *******
2026-03-11 00:44:39.663402 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663414 | orchestrator | 
2026-03-11 00:44:39.663427 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-11 00:44:39.663457 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:00.121) 0:00:42.255 *******
2026-03-11 00:44:39.663471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.663486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.663499 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663512 | orchestrator | 
2026-03-11 00:44:39.663523 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-11 00:44:39.663534 | orchestrator | Wednesday 11 March 2026 00:44:37 +0000 (0:00:00.126) 0:00:42.381 *******
2026-03-11 00:44:39.663545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.663567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.663578 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663589 | orchestrator | 
2026-03-11 00:44:39.663600 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-11 00:44:39.663611 | orchestrator | Wednesday 11 March 2026 00:44:37 +0000 (0:00:00.135) 0:00:42.516 *******
2026-03-11 00:44:39.663622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.663634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.663645 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663656 | orchestrator | 
2026-03-11 00:44:39.663667 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-11 00:44:39.663678 | orchestrator | Wednesday 11 March 2026 00:44:37 +0000 (0:00:00.258) 0:00:42.774 *******
2026-03-11 00:44:39.663689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.663700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.663711 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663722 | orchestrator | 
2026-03-11 00:44:39.663753 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-11 00:44:39.663764 | orchestrator | Wednesday 11 March 2026 00:44:37 +0000 (0:00:00.124) 0:00:42.898 *******
2026-03-11 00:44:39.663775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.663787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.663798 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663809 | orchestrator | 
2026-03-11 00:44:39.663820 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-11 00:44:39.663831 | orchestrator | Wednesday 11 March 2026 00:44:37 +0000 (0:00:00.135) 0:00:43.033 *******
2026-03-11 00:44:39.663841 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.663853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.663864 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663875 | orchestrator | 
2026-03-11 00:44:39.663886 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-11 00:44:39.663920 | orchestrator | Wednesday 11 March 2026 00:44:37 +0000 (0:00:00.139) 0:00:43.173 *******
2026-03-11 00:44:39.663932 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.663943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.663954 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.663965 | orchestrator | 
2026-03-11 00:44:39.663976 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-11 00:44:39.663987 | orchestrator | Wednesday 11 March 2026 00:44:37 +0000 (0:00:00.131) 0:00:43.305 *******
2026-03-11 00:44:39.663997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.664016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.664032 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.664044 | orchestrator | 
2026-03-11 00:44:39.664055 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-11 00:44:39.664065 | orchestrator | Wednesday 11 March 2026 00:44:38 +0000 (0:00:00.132) 0:00:43.437 *******
2026-03-11 00:44:39.664076 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:39.664087 | orchestrator | 
2026-03-11 00:44:39.664098 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-11 00:44:39.664109 | orchestrator | Wednesday 11 March 2026 00:44:38 +0000 (0:00:00.535) 0:00:43.973 *******
2026-03-11 00:44:39.664120 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:39.664131 | orchestrator | 
2026-03-11 00:44:39.664141 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-11 00:44:39.664152 | orchestrator | Wednesday 11 March 2026 00:44:39 +0000 (0:00:00.498) 0:00:44.471 *******
2026-03-11 00:44:39.664163 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:44:39.664174 | orchestrator | 
2026-03-11 00:44:39.664184 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-11 00:44:39.664195 | orchestrator | Wednesday 11 March 2026 00:44:39 +0000 (0:00:00.126) 0:00:44.598 *******
2026-03-11 00:44:39.664206 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'vg_name': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'})
2026-03-11 00:44:39.664219 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'vg_name': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'})
2026-03-11 00:44:39.664229 | orchestrator | 
2026-03-11 00:44:39.664240 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-11 00:44:39.664251 | orchestrator | Wednesday 11 March 2026 00:44:39 +0000 (0:00:00.144) 0:00:44.743 *******
2026-03-11 00:44:39.664262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.664273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:39.664284 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:39.664295 | orchestrator | 
2026-03-11 00:44:39.664305 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-11 00:44:39.664316 | orchestrator | Wednesday 11 March 2026 00:44:39 +0000 (0:00:00.144) 0:00:44.888 *******
2026-03-11 00:44:39.664327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:39.664345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:45.311107 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:45.311192 | orchestrator | 
2026-03-11 00:44:45.311201 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-11 00:44:45.311207 | orchestrator | Wednesday 11 March 2026 00:44:39 +0000 (0:00:00.144) 0:00:45.032 *******
2026-03-11 00:44:45.311212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'}) 
2026-03-11 00:44:45.311217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'}) 
2026-03-11 00:44:45.311221 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:44:45.311225 | orchestrator | 
2026-03-11 00:44:45.311229 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-11 00:44:45.311249 | orchestrator | Wednesday 11 March 2026 00:44:39 +0000 (0:00:00.132) 0:00:45.164 *******
2026-03-11 00:44:45.311254 | orchestrator | ok: [testbed-node-4] => {
2026-03-11 00:44:45.311258 | orchestrator |     "lvm_report": {
2026-03-11 00:44:45.311263 | orchestrator |         "lv": [
2026-03-11 00:44:45.311267 | orchestrator |             {
2026-03-11 00:44:45.311272 | orchestrator |                 "lv_name": "osd-block-9a64462a-5614-5a25-979d-2f017565a0c4",
2026-03-11 00:44:45.311277 | orchestrator |                 "vg_name": "ceph-9a64462a-5614-5a25-979d-2f017565a0c4"
2026-03-11 00:44:45.311281 | orchestrator |             },
2026-03-11 00:44:45.311284 | orchestrator |             {
2026-03-11 00:44:45.311288 | orchestrator |                 "lv_name": "osd-block-9e6773b3-a2d9-5476-8e14-434a68284534",
2026-03-11 00:44:45.311292 | orchestrator |                 "vg_name": "ceph-9e6773b3-a2d9-5476-8e14-434a68284534"
2026-03-11 00:44:45.311296 | orchestrator |             }
2026-03-11 00:44:45.311300 | orchestrator |         ],
2026-03-11 00:44:45.311304 | orchestrator |         "pv": [
2026-03-11 00:44:45.311307 | orchestrator |             {
2026-03-11 00:44:45.311311 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-11 00:44:45.311315 | orchestrator |                 "vg_name": "ceph-9a64462a-5614-5a25-979d-2f017565a0c4"
2026-03-11 00:44:45.311319 | orchestrator |             },
2026-03-11 00:44:45.311323 | orchestrator |             {
2026-03-11 00:44:45.311327 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-11 00:44:45.311330 | orchestrator |                 "vg_name": "ceph-9e6773b3-a2d9-5476-8e14-434a68284534"
2026-03-11 00:44:45.311334 | orchestrator |             }
2026-03-11 00:44:45.311338 | orchestrator |         ]
2026-03-11 00:44:45.311342 | orchestrator |     }
2026-03-11 00:44:45.311346 | orchestrator | }
2026-03-11 00:44:45.311350 | orchestrator | 
2026-03-11 00:44:45.311354 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-11 00:44:45.311358 | orchestrator | 
2026-03-11 00:44:45.311361 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-11 00:44:45.311365 | orchestrator | Wednesday 11 March 2026 00:44:40 +0000 (0:00:00.381) 0:00:45.545 *******
2026-03-11 00:44:45.311369 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-11 00:44:45.311373 | orchestrator | 
2026-03-11 00:44:45.311377 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-11 00:44:45.311381 | orchestrator | Wednesday 11 March 2026 00:44:40 +0000 (0:00:00.234) 0:00:45.780 *******
2026-03-11 00:44:45.311385 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:45.311389 | orchestrator | 
2026-03-11 00:44:45.311393 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311397 | orchestrator | Wednesday 11 March 2026 00:44:40 +0000 (0:00:00.214) 0:00:45.995 *******
2026-03-11 00:44:45.311400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-11 00:44:45.311404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-11 00:44:45.311408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-11 00:44:45.311412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-11 00:44:45.311416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-11 00:44:45.311419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-11 00:44:45.311423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-11 00:44:45.311427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-11 00:44:45.311430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-11 00:44:45.311434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-11 00:44:45.311442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-11 00:44:45.311445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-11 00:44:45.311449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-11 00:44:45.311453 | orchestrator | 
2026-03-11 00:44:45.311457 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311464 | orchestrator | Wednesday 11 March 2026 00:44:41 +0000 (0:00:00.395) 0:00:46.390 *******
2026-03-11 00:44:45.311467 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311471 | orchestrator | 
2026-03-11 00:44:45.311475 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311479 | orchestrator | Wednesday 11 March 2026 00:44:41 +0000 (0:00:00.188) 0:00:46.579 *******
2026-03-11 00:44:45.311482 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311486 | orchestrator | 
2026-03-11 00:44:45.311490 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311504 | orchestrator | Wednesday 11 March 2026 00:44:41 +0000 (0:00:00.200) 0:00:46.779 *******
2026-03-11 00:44:45.311508 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311512 | orchestrator | 
2026-03-11 00:44:45.311516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311519 | orchestrator | Wednesday 11 March 2026 00:44:41 +0000 (0:00:00.186) 0:00:46.965 *******
2026-03-11 00:44:45.311523 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311527 | orchestrator | 
2026-03-11 00:44:45.311530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311565 | orchestrator | Wednesday 11 March 2026 00:44:41 +0000 (0:00:00.164) 0:00:47.130 *******
2026-03-11 00:44:45.311570 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311576 | orchestrator | 
2026-03-11 00:44:45.311583 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311591 | orchestrator | Wednesday 11 March 2026 00:44:42 +0000 (0:00:00.610) 0:00:47.740 *******
2026-03-11 00:44:45.311599 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311608 | orchestrator | 
2026-03-11 00:44:45.311614 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311619 | orchestrator | Wednesday 11 March 2026 00:44:42 +0000 (0:00:00.238) 0:00:47.979 *******
2026-03-11 00:44:45.311625 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311630 | orchestrator | 
2026-03-11 00:44:45.311637 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311643 | orchestrator | Wednesday 11 March 2026 00:44:42 +0000 (0:00:00.245) 0:00:48.224 *******
2026-03-11 00:44:45.311649 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:45.311654 | orchestrator | 
2026-03-11 00:44:45.311660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311665 | orchestrator | Wednesday 11 March 2026 00:44:43 +0000 (0:00:00.209) 0:00:48.433 *******
2026-03-11 00:44:45.311671 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7)
2026-03-11 00:44:45.311679 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7)
2026-03-11 00:44:45.311684 | orchestrator | 
2026-03-11 00:44:45.311689 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311695 | orchestrator | Wednesday 11 March 2026 00:44:43 +0000 (0:00:00.387) 0:00:48.821 *******
2026-03-11 00:44:45.311701 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5)
2026-03-11 00:44:45.311707 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5)
2026-03-11 00:44:45.311714 | orchestrator | 
2026-03-11 00:44:45.311719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311736 | orchestrator | Wednesday 11 March 2026 00:44:43 +0000 (0:00:00.390) 0:00:49.212 *******
2026-03-11 00:44:45.311742 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20)
2026-03-11 00:44:45.311748 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20)
2026-03-11 00:44:45.311755 | orchestrator | 
2026-03-11 00:44:45.311761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311767 | orchestrator | Wednesday 11 March 2026 00:44:44 +0000 (0:00:00.390) 0:00:49.602 *******
2026-03-11 00:44:45.311773 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb)
2026-03-11 00:44:45.311779 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb)
2026-03-11 00:44:45.311785 | orchestrator | 
2026-03-11 00:44:45.311791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:45.311798 | orchestrator | Wednesday 11 March 2026 00:44:44 +0000 (0:00:00.393) 0:00:49.996 *******
2026-03-11 00:44:45.311804 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-11 00:44:45.311811 | orchestrator | 
2026-03-11 00:44:45.311818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:45.311823 | orchestrator | Wednesday 11 March 2026 00:44:44 +0000 (0:00:00.310) 0:00:50.306 *******
2026-03-11 00:44:45.311828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-11 00:44:45.311832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-11 00:44:45.311837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-11 00:44:45.311841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-11 00:44:45.311845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-11 00:44:45.311849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-11 00:44:45.311854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-11 00:44:45.311858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-11 00:44:45.311862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-11 00:44:45.311866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-11 00:44:45.311870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-11 00:44:45.311879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-11 00:44:53.497477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-11 00:44:53.497574 | orchestrator | 
2026-03-11 00:44:53.497588 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:53.497599 | orchestrator | Wednesday 11 March 2026 00:44:45 +0000 (0:00:00.368) 0:00:50.675 *******
2026-03-11 00:44:53.497609 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:53.497620 | orchestrator | 
2026-03-11 00:44:53.497630 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:53.497640 | orchestrator | Wednesday 11 March 2026 00:44:45 +0000 (0:00:00.194) 0:00:50.869 *******
2026-03-11 00:44:53.497650 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:53.497659 | orchestrator | 
2026-03-11 00:44:53.497669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:53.497683 | orchestrator | Wednesday 11 March 2026 00:44:45 +0000 (0:00:00.477) 0:00:51.347 *******
2026-03-11 00:44:53.497700 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:53.497745 | orchestrator | 
2026-03-11 00:44:53.497763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:53.497779 | 
orchestrator | Wednesday 11 March 2026 00:44:46 +0000 (0:00:00.184) 0:00:51.532 ******* 2026-03-11 00:44:53.497796 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.497812 | orchestrator | 2026-03-11 00:44:53.497828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.497845 | orchestrator | Wednesday 11 March 2026 00:44:46 +0000 (0:00:00.181) 0:00:51.713 ******* 2026-03-11 00:44:53.497861 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.497877 | orchestrator | 2026-03-11 00:44:53.498108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498127 | orchestrator | Wednesday 11 March 2026 00:44:46 +0000 (0:00:00.178) 0:00:51.891 ******* 2026-03-11 00:44:53.498144 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.498161 | orchestrator | 2026-03-11 00:44:53.498178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498193 | orchestrator | Wednesday 11 March 2026 00:44:46 +0000 (0:00:00.180) 0:00:52.072 ******* 2026-03-11 00:44:53.498209 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.498226 | orchestrator | 2026-03-11 00:44:53.498242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498259 | orchestrator | Wednesday 11 March 2026 00:44:46 +0000 (0:00:00.194) 0:00:52.267 ******* 2026-03-11 00:44:53.498275 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.498290 | orchestrator | 2026-03-11 00:44:53.498307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498323 | orchestrator | Wednesday 11 March 2026 00:44:47 +0000 (0:00:00.159) 0:00:52.427 ******* 2026-03-11 00:44:53.498339 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-11 00:44:53.498379 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-11 00:44:53.498396 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-11 00:44:53.498413 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-11 00:44:53.498429 | orchestrator | 2026-03-11 00:44:53.498443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498454 | orchestrator | Wednesday 11 March 2026 00:44:47 +0000 (0:00:00.577) 0:00:53.004 ******* 2026-03-11 00:44:53.498463 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.498473 | orchestrator | 2026-03-11 00:44:53.498483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498492 | orchestrator | Wednesday 11 March 2026 00:44:47 +0000 (0:00:00.183) 0:00:53.187 ******* 2026-03-11 00:44:53.498502 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.498516 | orchestrator | 2026-03-11 00:44:53.498532 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498548 | orchestrator | Wednesday 11 March 2026 00:44:47 +0000 (0:00:00.186) 0:00:53.374 ******* 2026-03-11 00:44:53.498564 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.498579 | orchestrator | 2026-03-11 00:44:53.498595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:44:53.498611 | orchestrator | Wednesday 11 March 2026 00:44:48 +0000 (0:00:00.174) 0:00:53.549 ******* 2026-03-11 00:44:53.498626 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.498642 | orchestrator | 2026-03-11 00:44:53.498659 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-11 00:44:53.498675 | orchestrator | Wednesday 11 March 2026 00:44:48 +0000 (0:00:00.179) 0:00:53.728 ******* 2026-03-11 00:44:53.498692 | orchestrator | skipping: [testbed-node-5] 2026-03-11 
00:44:53.498703 | orchestrator | 2026-03-11 00:44:53.498713 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-11 00:44:53.498722 | orchestrator | Wednesday 11 March 2026 00:44:48 +0000 (0:00:00.248) 0:00:53.977 ******* 2026-03-11 00:44:53.498732 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}}) 2026-03-11 00:44:53.498757 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12aec0f2-63b1-5667-a447-7095f264ece1'}}) 2026-03-11 00:44:53.498767 | orchestrator | 2026-03-11 00:44:53.498777 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-11 00:44:53.498787 | orchestrator | Wednesday 11 March 2026 00:44:48 +0000 (0:00:00.175) 0:00:54.152 ******* 2026-03-11 00:44:53.498797 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}) 2026-03-11 00:44:53.498809 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'}) 2026-03-11 00:44:53.498826 | orchestrator | 2026-03-11 00:44:53.498842 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-11 00:44:53.498909 | orchestrator | Wednesday 11 March 2026 00:44:50 +0000 (0:00:01.868) 0:00:56.021 ******* 2026-03-11 00:44:53.498929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:53.498941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:53.498951 | orchestrator | skipping: 
[testbed-node-5] 2026-03-11 00:44:53.498961 | orchestrator | 2026-03-11 00:44:53.498971 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-11 00:44:53.498980 | orchestrator | Wednesday 11 March 2026 00:44:50 +0000 (0:00:00.144) 0:00:56.166 ******* 2026-03-11 00:44:53.498991 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}) 2026-03-11 00:44:53.499061 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'}) 2026-03-11 00:44:53.499072 | orchestrator | 2026-03-11 00:44:53.499082 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-11 00:44:53.499092 | orchestrator | Wednesday 11 March 2026 00:44:52 +0000 (0:00:01.388) 0:00:57.554 ******* 2026-03-11 00:44:53.499101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:53.499111 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:53.499121 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.499130 | orchestrator | 2026-03-11 00:44:53.499140 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-11 00:44:53.499150 | orchestrator | Wednesday 11 March 2026 00:44:52 +0000 (0:00:00.124) 0:00:57.678 ******* 2026-03-11 00:44:53.499159 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.499169 | orchestrator | 2026-03-11 00:44:53.499178 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-11 00:44:53.499188 | 
orchestrator | Wednesday 11 March 2026 00:44:52 +0000 (0:00:00.129) 0:00:57.807 ******* 2026-03-11 00:44:53.499198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:53.499216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:53.499226 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.499236 | orchestrator | 2026-03-11 00:44:53.499245 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-11 00:44:53.499255 | orchestrator | Wednesday 11 March 2026 00:44:52 +0000 (0:00:00.133) 0:00:57.941 ******* 2026-03-11 00:44:53.499276 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.499286 | orchestrator | 2026-03-11 00:44:53.499295 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-11 00:44:53.499305 | orchestrator | Wednesday 11 March 2026 00:44:52 +0000 (0:00:00.130) 0:00:58.071 ******* 2026-03-11 00:44:53.499314 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:53.499324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:53.499334 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.499344 | orchestrator | 2026-03-11 00:44:53.499353 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-11 00:44:53.499363 | orchestrator | Wednesday 11 March 2026 00:44:52 +0000 (0:00:00.128) 0:00:58.200 ******* 2026-03-11 00:44:53.499373 | orchestrator | 
skipping: [testbed-node-5] 2026-03-11 00:44:53.499382 | orchestrator | 2026-03-11 00:44:53.499392 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-11 00:44:53.499402 | orchestrator | Wednesday 11 March 2026 00:44:52 +0000 (0:00:00.124) 0:00:58.324 ******* 2026-03-11 00:44:53.499411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:53.499421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:53.499431 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:53.499441 | orchestrator | 2026-03-11 00:44:53.499458 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-11 00:44:53.499474 | orchestrator | Wednesday 11 March 2026 00:44:53 +0000 (0:00:00.140) 0:00:58.464 ******* 2026-03-11 00:44:53.499491 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:53.499506 | orchestrator | 2026-03-11 00:44:53.499522 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-11 00:44:53.499536 | orchestrator | Wednesday 11 March 2026 00:44:53 +0000 (0:00:00.260) 0:00:58.724 ******* 2026-03-11 00:44:53.499562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:59.315403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:59.315527 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.315544 | orchestrator | 2026-03-11 00:44:59.315557 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-11 00:44:59.315570 | orchestrator | Wednesday 11 March 2026 00:44:53 +0000 (0:00:00.144) 0:00:58.869 ******* 2026-03-11 00:44:59.315635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:59.315651 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:59.315673 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.315685 | orchestrator | 2026-03-11 00:44:59.315696 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-11 00:44:59.315707 | orchestrator | Wednesday 11 March 2026 00:44:53 +0000 (0:00:00.138) 0:00:59.007 ******* 2026-03-11 00:44:59.315718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:59.315729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:59.315768 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.315780 | orchestrator | 2026-03-11 00:44:59.315791 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-11 00:44:59.315802 | orchestrator | Wednesday 11 March 2026 00:44:53 +0000 (0:00:00.130) 0:00:59.138 ******* 2026-03-11 00:44:59.315813 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.315823 | orchestrator | 2026-03-11 00:44:59.315834 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-11 00:44:59.315845 | orchestrator | Wednesday 11 March 2026 00:44:53 +0000 
(0:00:00.124) 0:00:59.263 ******* 2026-03-11 00:44:59.315856 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.315867 | orchestrator | 2026-03-11 00:44:59.315877 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-11 00:44:59.315910 | orchestrator | Wednesday 11 March 2026 00:44:54 +0000 (0:00:00.134) 0:00:59.397 ******* 2026-03-11 00:44:59.315921 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.315932 | orchestrator | 2026-03-11 00:44:59.315945 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-11 00:44:59.315958 | orchestrator | Wednesday 11 March 2026 00:44:54 +0000 (0:00:00.128) 0:00:59.526 ******* 2026-03-11 00:44:59.315970 | orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:44:59.315983 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-11 00:44:59.315996 | orchestrator | } 2026-03-11 00:44:59.316008 | orchestrator | 2026-03-11 00:44:59.316021 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-11 00:44:59.316033 | orchestrator | Wednesday 11 March 2026 00:44:54 +0000 (0:00:00.146) 0:00:59.672 ******* 2026-03-11 00:44:59.316046 | orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:44:59.316059 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-11 00:44:59.316072 | orchestrator | } 2026-03-11 00:44:59.316085 | orchestrator | 2026-03-11 00:44:59.316098 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-11 00:44:59.316110 | orchestrator | Wednesday 11 March 2026 00:44:54 +0000 (0:00:00.127) 0:00:59.799 ******* 2026-03-11 00:44:59.316123 | orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:44:59.316136 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-11 00:44:59.316155 | orchestrator | } 2026-03-11 00:44:59.316182 | orchestrator | 2026-03-11 00:44:59.316202 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-03-11 00:44:59.316220 | orchestrator | Wednesday 11 March 2026 00:44:54 +0000 (0:00:00.151) 0:00:59.951 ******* 2026-03-11 00:44:59.316237 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:59.316256 | orchestrator | 2026-03-11 00:44:59.316274 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-11 00:44:59.316293 | orchestrator | Wednesday 11 March 2026 00:44:55 +0000 (0:00:00.497) 0:01:00.448 ******* 2026-03-11 00:44:59.316311 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:59.316330 | orchestrator | 2026-03-11 00:44:59.316349 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-11 00:44:59.316367 | orchestrator | Wednesday 11 March 2026 00:44:55 +0000 (0:00:00.510) 0:01:00.959 ******* 2026-03-11 00:44:59.316386 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:59.316406 | orchestrator | 2026-03-11 00:44:59.316426 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-11 00:44:59.316437 | orchestrator | Wednesday 11 March 2026 00:44:56 +0000 (0:00:00.686) 0:01:01.646 ******* 2026-03-11 00:44:59.316448 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:59.316459 | orchestrator | 2026-03-11 00:44:59.316470 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-11 00:44:59.316481 | orchestrator | Wednesday 11 March 2026 00:44:56 +0000 (0:00:00.136) 0:01:01.782 ******* 2026-03-11 00:44:59.316491 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.316502 | orchestrator | 2026-03-11 00:44:59.316513 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-11 00:44:59.316536 | orchestrator | Wednesday 11 March 2026 00:44:56 +0000 (0:00:00.102) 0:01:01.884 ******* 2026-03-11 00:44:59.316547 | orchestrator | 
skipping: [testbed-node-5] 2026-03-11 00:44:59.316558 | orchestrator | 2026-03-11 00:44:59.316569 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-11 00:44:59.316597 | orchestrator | Wednesday 11 March 2026 00:44:56 +0000 (0:00:00.112) 0:01:01.997 ******* 2026-03-11 00:44:59.316609 | orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:44:59.316620 | orchestrator |  "vgs_report": { 2026-03-11 00:44:59.316632 | orchestrator |  "vg": [] 2026-03-11 00:44:59.316663 | orchestrator |  } 2026-03-11 00:44:59.316675 | orchestrator | } 2026-03-11 00:44:59.316686 | orchestrator | 2026-03-11 00:44:59.316697 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-11 00:44:59.316708 | orchestrator | Wednesday 11 March 2026 00:44:56 +0000 (0:00:00.146) 0:01:02.144 ******* 2026-03-11 00:44:59.316718 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.316729 | orchestrator | 2026-03-11 00:44:59.316740 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-11 00:44:59.316750 | orchestrator | Wednesday 11 March 2026 00:44:56 +0000 (0:00:00.141) 0:01:02.286 ******* 2026-03-11 00:44:59.316761 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.316771 | orchestrator | 2026-03-11 00:44:59.316782 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-11 00:44:59.316793 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.112) 0:01:02.399 ******* 2026-03-11 00:44:59.316804 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.316814 | orchestrator | 2026-03-11 00:44:59.316824 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-11 00:44:59.316835 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.117) 0:01:02.517 ******* 2026-03-11 00:44:59.316846 | orchestrator | 
skipping: [testbed-node-5] 2026-03-11 00:44:59.316856 | orchestrator | 2026-03-11 00:44:59.316867 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-11 00:44:59.316878 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.144) 0:01:02.662 ******* 2026-03-11 00:44:59.316979 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.316990 | orchestrator | 2026-03-11 00:44:59.317000 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-11 00:44:59.317011 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.120) 0:01:02.782 ******* 2026-03-11 00:44:59.317022 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317033 | orchestrator | 2026-03-11 00:44:59.317043 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-11 00:44:59.317054 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.121) 0:01:02.903 ******* 2026-03-11 00:44:59.317065 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317076 | orchestrator | 2026-03-11 00:44:59.317087 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-11 00:44:59.317097 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.112) 0:01:03.016 ******* 2026-03-11 00:44:59.317108 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317119 | orchestrator | 2026-03-11 00:44:59.317130 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-11 00:44:59.317140 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.355) 0:01:03.371 ******* 2026-03-11 00:44:59.317165 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317176 | orchestrator | 2026-03-11 00:44:59.317203 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-11 
00:44:59.317215 | orchestrator | Wednesday 11 March 2026 00:44:58 +0000 (0:00:00.151) 0:01:03.523 ******* 2026-03-11 00:44:59.317226 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317236 | orchestrator | 2026-03-11 00:44:59.317247 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-11 00:44:59.317258 | orchestrator | Wednesday 11 March 2026 00:44:58 +0000 (0:00:00.126) 0:01:03.650 ******* 2026-03-11 00:44:59.317277 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317289 | orchestrator | 2026-03-11 00:44:59.317299 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-11 00:44:59.317310 | orchestrator | Wednesday 11 March 2026 00:44:58 +0000 (0:00:00.135) 0:01:03.785 ******* 2026-03-11 00:44:59.317321 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317332 | orchestrator | 2026-03-11 00:44:59.317343 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-11 00:44:59.317357 | orchestrator | Wednesday 11 March 2026 00:44:58 +0000 (0:00:00.139) 0:01:03.925 ******* 2026-03-11 00:44:59.317376 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317390 | orchestrator | 2026-03-11 00:44:59.317401 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-11 00:44:59.317412 | orchestrator | Wednesday 11 March 2026 00:44:58 +0000 (0:00:00.139) 0:01:04.065 ******* 2026-03-11 00:44:59.317423 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317433 | orchestrator | 2026-03-11 00:44:59.317444 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-11 00:44:59.317455 | orchestrator | Wednesday 11 March 2026 00:44:58 +0000 (0:00:00.137) 0:01:04.203 ******* 2026-03-11 00:44:59.317466 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:59.317478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:59.317489 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317500 | orchestrator | 2026-03-11 00:44:59.317511 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-11 00:44:59.317522 | orchestrator | Wednesday 11 March 2026 00:44:59 +0000 (0:00:00.180) 0:01:04.384 ******* 2026-03-11 00:44:59.317533 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:44:59.317545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:44:59.317556 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:59.317567 | orchestrator | 2026-03-11 00:44:59.317578 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-11 00:44:59.317588 | orchestrator | Wednesday 11 March 2026 00:44:59 +0000 (0:00:00.148) 0:01:04.533 ******* 2026-03-11 00:44:59.317609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.294942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295051 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295067 | orchestrator | 2026-03-11 00:45:02.295078 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-11 00:45:02.295090 | orchestrator | Wednesday 11 March 2026 00:44:59 +0000 (0:00:00.153) 0:01:04.686 ******* 2026-03-11 00:45:02.295102 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295122 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295132 | orchestrator | 2026-03-11 00:45:02.295142 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-11 00:45:02.295152 | orchestrator | Wednesday 11 March 2026 00:44:59 +0000 (0:00:00.166) 0:01:04.852 ******* 2026-03-11 00:45:02.295189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295209 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295219 | orchestrator | 2026-03-11 00:45:02.295230 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-11 00:45:02.295241 | orchestrator | Wednesday 11 March 2026 00:44:59 +0000 (0:00:00.161) 0:01:05.014 ******* 2026-03-11 00:45:02.295252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295261 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295288 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295299 | orchestrator | 2026-03-11 00:45:02.295310 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-11 00:45:02.295321 | orchestrator | Wednesday 11 March 2026 00:45:00 +0000 (0:00:00.361) 0:01:05.376 ******* 2026-03-11 00:45:02.295331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295342 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295353 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295364 | orchestrator | 2026-03-11 00:45:02.295375 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-11 00:45:02.295385 | orchestrator | Wednesday 11 March 2026 00:45:00 +0000 (0:00:00.153) 0:01:05.529 ******* 2026-03-11 00:45:02.295395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295406 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295416 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295426 | orchestrator | 2026-03-11 00:45:02.295437 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-11 00:45:02.295448 | orchestrator | Wednesday 11 March 2026 00:45:00 +0000 (0:00:00.152) 0:01:05.681 ******* 2026-03-11 00:45:02.295459 | 
orchestrator | ok: [testbed-node-5] 2026-03-11 00:45:02.295471 | orchestrator | 2026-03-11 00:45:02.295483 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-11 00:45:02.295494 | orchestrator | Wednesday 11 March 2026 00:45:00 +0000 (0:00:00.532) 0:01:06.214 ******* 2026-03-11 00:45:02.295504 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:45:02.295516 | orchestrator | 2026-03-11 00:45:02.295527 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-11 00:45:02.295538 | orchestrator | Wednesday 11 March 2026 00:45:01 +0000 (0:00:00.544) 0:01:06.758 ******* 2026-03-11 00:45:02.295548 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:45:02.295558 | orchestrator | 2026-03-11 00:45:02.295569 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-11 00:45:02.295579 | orchestrator | Wednesday 11 March 2026 00:45:01 +0000 (0:00:00.147) 0:01:06.906 ******* 2026-03-11 00:45:02.295588 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'vg_name': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'}) 2026-03-11 00:45:02.295600 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'vg_name': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'}) 2026-03-11 00:45:02.295618 | orchestrator | 2026-03-11 00:45:02.295628 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-11 00:45:02.295639 | orchestrator | Wednesday 11 March 2026 00:45:01 +0000 (0:00:00.164) 0:01:07.071 ******* 2026-03-11 00:45:02.295666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295689 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295700 | orchestrator | 2026-03-11 00:45:02.295711 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-11 00:45:02.295722 | orchestrator | Wednesday 11 March 2026 00:45:01 +0000 (0:00:00.144) 0:01:07.216 ******* 2026-03-11 00:45:02.295734 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295756 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295766 | orchestrator | 2026-03-11 00:45:02.295777 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-11 00:45:02.295787 | orchestrator | Wednesday 11 March 2026 00:45:01 +0000 (0:00:00.155) 0:01:07.372 ******* 2026-03-11 00:45:02.295798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})  2026-03-11 00:45:02.295809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})  2026-03-11 00:45:02.295820 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:02.295831 | orchestrator | 2026-03-11 00:45:02.295842 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-11 00:45:02.295853 | orchestrator | Wednesday 11 March 2026 00:45:02 +0000 (0:00:00.149) 0:01:07.521 ******* 2026-03-11 00:45:02.295863 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:45:02.295874 | orchestrator |  "lvm_report": { 2026-03-11 00:45:02.295908 | orchestrator |  "lv": [ 2026-03-11 00:45:02.295920 | orchestrator |  { 2026-03-11 00:45:02.295930 | orchestrator |  "lv_name": "osd-block-12aec0f2-63b1-5667-a447-7095f264ece1", 2026-03-11 00:45:02.295947 | orchestrator |  "vg_name": "ceph-12aec0f2-63b1-5667-a447-7095f264ece1" 2026-03-11 00:45:02.295959 | orchestrator |  }, 2026-03-11 00:45:02.295969 | orchestrator |  { 2026-03-11 00:45:02.295979 | orchestrator |  "lv_name": "osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9", 2026-03-11 00:45:02.295990 | orchestrator |  "vg_name": "ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9" 2026-03-11 00:45:02.296000 | orchestrator |  } 2026-03-11 00:45:02.296010 | orchestrator |  ], 2026-03-11 00:45:02.296021 | orchestrator |  "pv": [ 2026-03-11 00:45:02.296030 | orchestrator |  { 2026-03-11 00:45:02.296041 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-11 00:45:02.296052 | orchestrator |  "vg_name": "ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9" 2026-03-11 00:45:02.296063 | orchestrator |  }, 2026-03-11 00:45:02.296073 | orchestrator |  { 2026-03-11 00:45:02.296083 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-11 00:45:02.296094 | orchestrator |  "vg_name": "ceph-12aec0f2-63b1-5667-a447-7095f264ece1" 2026-03-11 00:45:02.296104 | orchestrator |  } 2026-03-11 00:45:02.296115 | orchestrator |  ] 2026-03-11 00:45:02.296125 | orchestrator |  } 2026-03-11 00:45:02.296136 | orchestrator | } 2026-03-11 00:45:02.296154 | orchestrator | 2026-03-11 00:45:02.296164 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:45:02.296175 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-11 00:45:02.296185 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-11 00:45:02.296196 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-11 00:45:02.296206 | orchestrator | 2026-03-11 00:45:02.296217 | orchestrator | 2026-03-11 00:45:02.296226 | orchestrator | 2026-03-11 00:45:02.296236 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:45:02.296246 | orchestrator | Wednesday 11 March 2026 00:45:02 +0000 (0:00:00.130) 0:01:07.652 ******* 2026-03-11 00:45:02.296257 | orchestrator | =============================================================================== 2026-03-11 00:45:02.296266 | orchestrator | Create block VGs -------------------------------------------------------- 5.57s 2026-03-11 00:45:02.296277 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s 2026-03-11 00:45:02.296286 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-03-11 00:45:02.296296 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.67s 2026-03-11 00:45:02.296306 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2026-03-11 00:45:02.296316 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2026-03-11 00:45:02.296326 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2026-03-11 00:45:02.296336 | orchestrator | Add known partitions to the list of available block devices ------------- 1.50s 2026-03-11 00:45:02.296355 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2026-03-11 00:45:02.663529 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2026-03-11 00:45:02.663642 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-03-11 00:45:02.663663 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-03-11 00:45:02.663673 | orchestrator | Print LVM report data --------------------------------------------------- 0.77s 2026-03-11 00:45:02.663681 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2026-03-11 00:45:02.663689 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-11 00:45:02.663698 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-11 00:45:02.663705 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.64s 2026-03-11 00:45:02.663713 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.64s 2026-03-11 00:45:02.663721 | orchestrator | Get initial list of available block devices ----------------------------- 0.63s 2026-03-11 00:45:02.663729 | orchestrator | Calculate size needed for LVs on ceph_wal_devices ----------------------- 0.62s 2026-03-11 00:45:14.999285 | orchestrator | 2026-03-11 00:45:14 | INFO  | Task 74ce1d6c-f86b-4000-9e52-f33e56e68c53 (facts) was prepared for execution. 2026-03-11 00:45:14.999374 | orchestrator | 2026-03-11 00:45:14 | INFO  | It takes a moment until task 74ce1d6c-f86b-4000-9e52-f33e56e68c53 (facts) has been started and output is visible here. 
2026-03-11 00:45:27.066789 | orchestrator | 2026-03-11 00:45:27.066957 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-11 00:45:27.066975 | orchestrator | 2026-03-11 00:45:27.066982 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-11 00:45:27.066988 | orchestrator | Wednesday 11 March 2026 00:45:19 +0000 (0:00:00.262) 0:00:00.262 ******* 2026-03-11 00:45:27.067021 | orchestrator | ok: [testbed-manager] 2026-03-11 00:45:27.067030 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:45:27.067037 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:45:27.067043 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:45:27.067049 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:45:27.067056 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:45:27.067062 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:45:27.067069 | orchestrator | 2026-03-11 00:45:27.067076 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-11 00:45:27.067080 | orchestrator | Wednesday 11 March 2026 00:45:20 +0000 (0:00:01.072) 0:00:01.335 ******* 2026-03-11 00:45:27.067085 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:45:27.067093 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:45:27.067099 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:45:27.067105 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:45:27.067131 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:45:27.067137 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:45:27.067143 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:27.067149 | orchestrator | 2026-03-11 00:45:27.067156 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-11 00:45:27.067162 | orchestrator | 2026-03-11 00:45:27.067168 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-11 00:45:27.067175 | orchestrator | Wednesday 11 March 2026 00:45:21 +0000 (0:00:01.199) 0:00:02.534 ******* 2026-03-11 00:45:27.067181 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:45:27.067187 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:45:27.067192 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:45:27.067198 | orchestrator | ok: [testbed-manager] 2026-03-11 00:45:27.067205 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:45:27.067211 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:45:27.067217 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:45:27.067223 | orchestrator | 2026-03-11 00:45:27.067229 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-11 00:45:27.067236 | orchestrator | 2026-03-11 00:45:27.067242 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-11 00:45:27.067249 | orchestrator | Wednesday 11 March 2026 00:45:26 +0000 (0:00:04.729) 0:00:07.264 ******* 2026-03-11 00:45:27.067255 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:45:27.067262 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:45:27.067268 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:45:27.067274 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:45:27.067280 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:45:27.067287 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:45:27.067293 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:45:27.067299 | orchestrator | 2026-03-11 00:45:27.067305 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:45:27.067312 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:45:27.067320 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-11 00:45:27.067326 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:45:27.067333 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:45:27.067339 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:45:27.067346 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:45:27.067353 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:45:27.067367 | orchestrator | 2026-03-11 00:45:27.067374 | orchestrator | 2026-03-11 00:45:27.067380 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:45:27.067387 | orchestrator | Wednesday 11 March 2026 00:45:26 +0000 (0:00:00.496) 0:00:07.760 ******* 2026-03-11 00:45:27.067393 | orchestrator | =============================================================================== 2026-03-11 00:45:27.067400 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s 2026-03-11 00:45:27.067406 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2026-03-11 00:45:27.067413 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.07s 2026-03-11 00:45:27.067420 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-11 00:45:39.377038 | orchestrator | 2026-03-11 00:45:39 | INFO  | Task 3402acc7-79c3-40f7-aa6f-544c50bf04f3 (frr) was prepared for execution. 2026-03-11 00:45:39.377130 | orchestrator | 2026-03-11 00:45:39 | INFO  | It takes a moment until task 3402acc7-79c3-40f7-aa6f-544c50bf04f3 (frr) has been started and output is visible here. 
2026-03-11 00:46:02.329527 | orchestrator | 2026-03-11 00:46:02.329654 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-11 00:46:02.329672 | orchestrator | 2026-03-11 00:46:02.329685 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-11 00:46:02.329712 | orchestrator | Wednesday 11 March 2026 00:45:43 +0000 (0:00:00.212) 0:00:00.212 ******* 2026-03-11 00:46:02.329726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-11 00:46:02.329738 | orchestrator | 2026-03-11 00:46:02.329749 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-11 00:46:02.329760 | orchestrator | Wednesday 11 March 2026 00:45:43 +0000 (0:00:00.231) 0:00:00.444 ******* 2026-03-11 00:46:02.329771 | orchestrator | changed: [testbed-manager] 2026-03-11 00:46:02.329783 | orchestrator | 2026-03-11 00:46:02.329794 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-11 00:46:02.329805 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:01.078) 0:00:01.523 ******* 2026-03-11 00:46:02.329821 | orchestrator | changed: [testbed-manager] 2026-03-11 00:46:02.329918 | orchestrator | 2026-03-11 00:46:02.329929 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-11 00:46:02.329941 | orchestrator | Wednesday 11 March 2026 00:45:53 +0000 (0:00:08.454) 0:00:09.977 ******* 2026-03-11 00:46:02.329952 | orchestrator | ok: [testbed-manager] 2026-03-11 00:46:02.329964 | orchestrator | 2026-03-11 00:46:02.330137 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-11 00:46:02.330153 | orchestrator | Wednesday 11 March 2026 00:45:53 +0000 (0:00:00.962) 0:00:10.940 ******* 2026-03-11 
00:46:02.330166 | orchestrator | changed: [testbed-manager] 2026-03-11 00:46:02.330178 | orchestrator | 2026-03-11 00:46:02.330192 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-11 00:46:02.330204 | orchestrator | Wednesday 11 March 2026 00:45:54 +0000 (0:00:00.850) 0:00:11.790 ******* 2026-03-11 00:46:02.330217 | orchestrator | ok: [testbed-manager] 2026-03-11 00:46:02.330229 | orchestrator | 2026-03-11 00:46:02.330243 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-11 00:46:02.330256 | orchestrator | Wednesday 11 March 2026 00:45:55 +0000 (0:00:01.098) 0:00:12.888 ******* 2026-03-11 00:46:02.330268 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:46:02.330281 | orchestrator | 2026-03-11 00:46:02.330294 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-11 00:46:02.330307 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:00.118) 0:00:13.007 ******* 2026-03-11 00:46:02.330320 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:46:02.330355 | orchestrator | 2026-03-11 00:46:02.330368 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-11 00:46:02.330382 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:00.133) 0:00:13.141 ******* 2026-03-11 00:46:02.330394 | orchestrator | changed: [testbed-manager] 2026-03-11 00:46:02.330407 | orchestrator | 2026-03-11 00:46:02.330420 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-11 00:46:02.330432 | orchestrator | Wednesday 11 March 2026 00:45:57 +0000 (0:00:00.909) 0:00:14.050 ******* 2026-03-11 00:46:02.330444 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-11 00:46:02.330455 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-11 00:46:02.330467 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-11 00:46:02.330478 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-11 00:46:02.330489 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-11 00:46:02.330500 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-11 00:46:02.330511 | orchestrator | 2026-03-11 00:46:02.330522 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-11 00:46:02.330533 | orchestrator | Wednesday 11 March 2026 00:45:59 +0000 (0:00:01.994) 0:00:16.044 ******* 2026-03-11 00:46:02.330544 | orchestrator | ok: [testbed-manager] 2026-03-11 00:46:02.330555 | orchestrator | 2026-03-11 00:46:02.330566 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-11 00:46:02.330576 | orchestrator | Wednesday 11 March 2026 00:46:00 +0000 (0:00:01.508) 0:00:17.552 ******* 2026-03-11 00:46:02.330587 | orchestrator | changed: [testbed-manager] 2026-03-11 00:46:02.330598 | orchestrator | 2026-03-11 00:46:02.330609 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:46:02.330620 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:46:02.330631 | orchestrator | 2026-03-11 00:46:02.330642 | orchestrator | 2026-03-11 00:46:02.330653 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:46:02.330664 | orchestrator | Wednesday 11 March 2026 00:46:01 +0000 (0:00:01.401) 0:00:18.953 ******* 2026-03-11 00:46:02.330675 | 
orchestrator | =============================================================================== 2026-03-11 00:46:02.330686 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.45s 2026-03-11 00:46:02.330697 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.99s 2026-03-11 00:46:02.330707 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.51s 2026-03-11 00:46:02.330718 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s 2026-03-11 00:46:02.330729 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.10s 2026-03-11 00:46:02.330760 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.08s 2026-03-11 00:46:02.330772 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.96s 2026-03-11 00:46:02.330783 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.91s 2026-03-11 00:46:02.330794 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.85s 2026-03-11 00:46:02.330804 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-03-11 00:46:02.330815 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s 2026-03-11 00:46:02.330848 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.12s 2026-03-11 00:46:02.666169 | orchestrator | 2026-03-11 00:46:02.667981 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Mar 11 00:46:02 UTC 2026 2026-03-11 00:46:02.668045 | orchestrator | 2026-03-11 00:46:04.623690 | orchestrator | 2026-03-11 00:46:04 | INFO  | Collection nutshell is prepared for execution 2026-03-11 00:46:04.623791 | orchestrator | 2026-03-11 00:46:04 | INFO  | A [0] - 
dotfiles 2026-03-11 00:46:14.712085 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [0] - homer 2026-03-11 00:46:14.712134 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [0] - netdata 2026-03-11 00:46:14.712140 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [0] - openstackclient 2026-03-11 00:46:14.712144 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [0] - phpmyadmin 2026-03-11 00:46:14.712149 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [0] - common 2026-03-11 00:46:14.712938 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- loadbalancer 2026-03-11 00:46:14.713119 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [2] --- opensearch 2026-03-11 00:46:14.713455 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [2] --- mariadb-ng 2026-03-11 00:46:14.713916 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [3] ---- horizon 2026-03-11 00:46:14.714364 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [3] ---- keystone 2026-03-11 00:46:14.714571 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- neutron 2026-03-11 00:46:14.714861 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [5] ------ wait-for-nova 2026-03-11 00:46:14.715255 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [6] ------- octavia 2026-03-11 00:46:14.718723 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- barbican 2026-03-11 00:46:14.718768 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- designate 2026-03-11 00:46:14.718773 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- ironic 2026-03-11 00:46:14.718777 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- placement 2026-03-11 00:46:14.718781 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- magnum 2026-03-11 00:46:14.718786 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- openvswitch 2026-03-11 00:46:14.718790 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [2] --- ovn 2026-03-11 00:46:14.718794 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- memcached 2026-03-11 
00:46:14.718798 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- redis 2026-03-11 00:46:14.718801 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- rabbitmq-ng 2026-03-11 00:46:14.718805 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [0] - kubernetes 2026-03-11 00:46:14.724493 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- kubeconfig 2026-03-11 00:46:14.724579 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- copy-kubeconfig 2026-03-11 00:46:14.724591 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [0] - ceph 2026-03-11 00:46:14.728681 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [1] -- ceph-pools 2026-03-11 00:46:14.728731 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [2] --- copy-ceph-keys 2026-03-11 00:46:14.728737 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [3] ---- cephclient 2026-03-11 00:46:14.731092 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-11 00:46:14.731144 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- wait-for-keystone 2026-03-11 00:46:14.731153 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-11 00:46:14.731160 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [5] ------ glance 2026-03-11 00:46:14.731167 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [5] ------ cinder 2026-03-11 00:46:14.731210 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [5] ------ nova 2026-03-11 00:46:14.731217 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [4] ----- prometheus 2026-03-11 00:46:14.732402 | orchestrator | 2026-03-11 00:46:14 | INFO  | A [5] ------ grafana 2026-03-11 00:46:14.913654 | orchestrator | 2026-03-11 00:46:14 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-11 00:46:14.913722 | orchestrator | 2026-03-11 00:46:14 | INFO  | Tasks are running in the background 2026-03-11 00:46:17.770667 | orchestrator | 2026-03-11 00:46:17 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-11 00:46:19.898492 | orchestrator | 2026-03-11 00:46:19 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:46:19.901955 | orchestrator | 2026-03-11 00:46:19 | INFO  | Task cecfb23c-a0fd-47b0-9f8b-0af3853e3b9c is in state STARTED 2026-03-11 00:46:19.905060 | orchestrator | 2026-03-11 00:46:19 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:46:19.905351 | orchestrator | 2026-03-11 00:46:19 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:46:19.906064 | orchestrator | 2026-03-11 00:46:19 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:46:19.906581 | orchestrator | 2026-03-11 00:46:19 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:46:19.907992 | orchestrator | 2026-03-11 00:46:19 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:46:19.908021 | orchestrator | 2026-03-11 00:46:19 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:46:41.948993 | orchestrator | 2026-03-11 00:46:41.949073 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-11 00:46:41.949083 | orchestrator | 2026-03-11 00:46:41.949091 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-03-11 00:46:41.949096 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:00.555) 0:00:00.555 ******* 2026-03-11 00:46:41.949100 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:46:41.949105 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:46:41.949110 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:46:41.949114 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:46:41.949118 | orchestrator | changed: [testbed-manager] 2026-03-11 00:46:41.949122 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:46:41.949126 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:46:41.949130 | orchestrator | 2026-03-11 00:46:41.949134 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-03-11 00:46:41.949138 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:04.546) 0:00:05.102 ******* 2026-03-11 00:46:41.949143 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-11 00:46:41.949147 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-11 00:46:41.949151 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-11 00:46:41.949155 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-11 00:46:41.949159 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-11 00:46:41.949163 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-11 00:46:41.949167 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-11 00:46:41.949171 | orchestrator | 2026-03-11 00:46:41.949175 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-03-11 00:46:41.949180 | orchestrator | Wednesday 11 March 2026 00:46:32 +0000 (0:00:01.390) 0:00:06.492 ******* 2026-03-11 00:46:41.949186 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:46:31.794278', 'end': '2026-03-11 00:46:31.801311', 'delta': '0:00:00.007033', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-11 00:46:41.949198 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:46:31.835042', 'end': '2026-03-11 00:46:31.841036', 'delta': '0:00:00.005994', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-11 00:46:41.949218 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:46:31.854534', 'end': '2026-03-11 00:46:31.861153', 'delta': '0:00:00.006619', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-11 00:46:41.949239 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:46:31.844349', 'end': '2026-03-11 00:46:31.851976', 'delta': '0:00:00.007627', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-11 00:46:41.949244 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:46:31.909262', 'end': '2026-03-11 00:46:31.914993', 'delta': '0:00:00.005731', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-11 00:46:41.949426 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:46:31.979167', 'end': '2026-03-11 00:46:31.984923', 'delta': '0:00:00.005756', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-11 00:46:41.949435 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:46:32.068450', 'end': '2026-03-11 00:46:32.075187', 'delta': '0:00:00.006737', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-11 00:46:41.949448 | orchestrator | 2026-03-11 00:46:41.949452 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-03-11 00:46:41.949457 | orchestrator | Wednesday 11 March 2026 00:46:35 +0000 (0:00:03.278) 0:00:09.771 ******* 2026-03-11 00:46:41.949461 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-11 00:46:41.949464 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-11 00:46:41.949468 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-11 00:46:41.949472 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-11 00:46:41.949476 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-11 00:46:41.949479 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-11 00:46:41.949483 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-11 00:46:41.949487 | orchestrator | 2026-03-11 00:46:41.949490 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-03-11 00:46:41.949494 | orchestrator | Wednesday 11 March 2026 00:46:36 +0000 (0:00:01.342) 0:00:11.114 ******* 2026-03-11 00:46:41.949498 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-11 00:46:41.949504 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-11 00:46:41.949508 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-11 00:46:41.949512 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-11 00:46:41.949516 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-11 00:46:41.949519 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-11 00:46:41.949523 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-11 00:46:41.949527 | orchestrator | 2026-03-11 00:46:41.949531 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:46:41.949540 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:46:41.949546 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:46:41.949550 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:46:41.949555 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:46:41.949559 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:46:41.949566 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:46:41.949572 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:46:41.949577 | orchestrator | 2026-03-11 00:46:41.949583 | orchestrator | 2026-03-11 00:46:41.949592 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-11 00:46:41.949599 | orchestrator | Wednesday 11 March 2026 00:46:39 +0000 (0:00:02.222) 0:00:13.337 ******* 2026-03-11 00:46:41.949607 | orchestrator | =============================================================================== 2026-03-11 00:46:41.949613 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.55s 2026-03-11 00:46:41.949625 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.28s 2026-03-11 00:46:41.949633 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.22s 2026-03-11 00:46:41.949639 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.39s 2026-03-11 00:46:41.949645 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.34s 2026-03-11 00:46:41.949651 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:46:41.949657 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:46:41.949662 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task cecfb23c-a0fd-47b0-9f8b-0af3853e3b9c is in state SUCCESS 2026-03-11 00:46:41.949668 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:46:41.949674 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:46:41.949680 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:46:41.949687 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:46:41.949693 | orchestrator | 2026-03-11 00:46:41 | INFO  | Task 
2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:46:41.949699 | orchestrator | 2026-03-11 00:46:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:46:45.072591 | orchestrator | 2026-03-11 00:46:45 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:46:45.072673 | orchestrator | 2026-03-11 00:46:45 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:46:45.074910 | orchestrator | 2026-03-11 00:46:45 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:46:45.074989 | orchestrator | 2026-03-11 00:46:45 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:46:45.075827 | orchestrator | 2026-03-11 00:46:45 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:46:45.077440 | orchestrator | 2026-03-11 00:46:45 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:46:45.077978 | orchestrator | 2026-03-11 00:46:45 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:46:45.078071 | orchestrator | 2026-03-11 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:46:48.112043 | orchestrator | 2026-03-11 00:46:48 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:46:48.112096 | orchestrator | 2026-03-11 00:46:48 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:46:48.113674 | orchestrator | 2026-03-11 00:46:48 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:46:48.116048 | orchestrator | 2026-03-11 00:46:48 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:46:48.117027 | orchestrator | 2026-03-11 00:46:48 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:46:48.118466 | orchestrator | 2026-03-11 00:46:48 | INFO  | Task 
84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:46:48.119485 | orchestrator | 2026-03-11 00:46:48 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:46:48.119548 | orchestrator | 2026-03-11 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:46:51.151867 | orchestrator | 2026-03-11 00:46:51 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:46:51.153497 | orchestrator | 2026-03-11 00:46:51 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:46:51.154601 | orchestrator | 2026-03-11 00:46:51 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:46:51.155557 | orchestrator | 2026-03-11 00:46:51 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:46:51.155988 | orchestrator | 2026-03-11 00:46:51 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:46:51.161203 | orchestrator | 2026-03-11 00:46:51 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:46:51.161334 | orchestrator | 2026-03-11 00:46:51 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:46:51.161354 | orchestrator | 2026-03-11 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:46:54.207274 | orchestrator | 2026-03-11 00:46:54 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:46:54.207418 | orchestrator | 2026-03-11 00:46:54 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:46:54.208493 | orchestrator | 2026-03-11 00:46:54 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:46:54.209661 | orchestrator | 2026-03-11 00:46:54 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:46:54.212446 | orchestrator | 2026-03-11 00:46:54 | INFO  | Task 
9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:46:54.215169 | orchestrator | 2026-03-11 00:46:54 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:46:54.215232 | orchestrator | 2026-03-11 00:46:54 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:46:54.215244 | orchestrator | 2026-03-11 00:46:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:46:57.302165 | orchestrator | 2026-03-11 00:46:57 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:46:57.303582 | orchestrator | 2026-03-11 00:46:57 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:46:57.305260 | orchestrator | 2026-03-11 00:46:57 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:46:57.307177 | orchestrator | 2026-03-11 00:46:57 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:46:57.308825 | orchestrator | 2026-03-11 00:46:57 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:46:57.310506 | orchestrator | 2026-03-11 00:46:57 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:46:57.311703 | orchestrator | 2026-03-11 00:46:57 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:46:57.311742 | orchestrator | 2026-03-11 00:46:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:00.508024 | orchestrator | 2026-03-11 00:47:00 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:00.508089 | orchestrator | 2026-03-11 00:47:00 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:00.508109 | orchestrator | 2026-03-11 00:47:00 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:00.508116 | orchestrator | 2026-03-11 00:47:00 | INFO  | Task 
a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:00.508137 | orchestrator | 2026-03-11 00:47:00 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:47:00.508144 | orchestrator | 2026-03-11 00:47:00 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:47:00.508149 | orchestrator | 2026-03-11 00:47:00 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:00.508155 | orchestrator | 2026-03-11 00:47:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:03.492626 | orchestrator | 2026-03-11 00:47:03 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:03.492677 | orchestrator | 2026-03-11 00:47:03 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:03.492682 | orchestrator | 2026-03-11 00:47:03 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:03.492686 | orchestrator | 2026-03-11 00:47:03 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:03.492689 | orchestrator | 2026-03-11 00:47:03 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:47:03.492692 | orchestrator | 2026-03-11 00:47:03 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state STARTED 2026-03-11 00:47:03.492696 | orchestrator | 2026-03-11 00:47:03 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:03.492699 | orchestrator | 2026-03-11 00:47:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:06.530853 | orchestrator | 2026-03-11 00:47:06 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:06.532211 | orchestrator | 2026-03-11 00:47:06 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:06.536682 | orchestrator | 2026-03-11 00:47:06 | INFO  | Task 
c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:06.555989 | orchestrator | 2026-03-11 00:47:06 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:06.580220 | orchestrator | 2026-03-11 00:47:06 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:47:06.580269 | orchestrator | 2026-03-11 00:47:06 | INFO  | Task 84830e67-2043-4d81-a069-b3f1c8cc07ef is in state SUCCESS 2026-03-11 00:47:06.580275 | orchestrator | 2026-03-11 00:47:06 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:06.580282 | orchestrator | 2026-03-11 00:47:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:09.601720 | orchestrator | 2026-03-11 00:47:09 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:09.601857 | orchestrator | 2026-03-11 00:47:09 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:09.601871 | orchestrator | 2026-03-11 00:47:09 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:09.604645 | orchestrator | 2026-03-11 00:47:09 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:09.604696 | orchestrator | 2026-03-11 00:47:09 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:47:09.604710 | orchestrator | 2026-03-11 00:47:09 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:09.604723 | orchestrator | 2026-03-11 00:47:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:12.625977 | orchestrator | 2026-03-11 00:47:12 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:12.626452 | orchestrator | 2026-03-11 00:47:12 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:12.638523 | orchestrator | 2026-03-11 00:47:12 | INFO  | Task 
c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:12.642185 | orchestrator | 2026-03-11 00:47:12 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:12.643505 | orchestrator | 2026-03-11 00:47:12 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state STARTED 2026-03-11 00:47:12.644690 | orchestrator | 2026-03-11 00:47:12 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:12.644745 | orchestrator | 2026-03-11 00:47:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:15.668990 | orchestrator | 2026-03-11 00:47:15 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:15.670737 | orchestrator | 2026-03-11 00:47:15 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:15.673707 | orchestrator | 2026-03-11 00:47:15 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:15.675155 | orchestrator | 2026-03-11 00:47:15 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:15.675887 | orchestrator | 2026-03-11 00:47:15 | INFO  | Task 9b456c0c-3835-426b-b1eb-ab6ed68afb33 is in state SUCCESS 2026-03-11 00:47:15.677090 | orchestrator | 2026-03-11 00:47:15 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:15.677494 | orchestrator | 2026-03-11 00:47:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:18.703626 | orchestrator | 2026-03-11 00:47:18 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:18.704705 | orchestrator | 2026-03-11 00:47:18 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:18.706493 | orchestrator | 2026-03-11 00:47:18 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:18.709300 | orchestrator | 2026-03-11 00:47:18 | INFO  | Task 
a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:18.715637 | orchestrator | 2026-03-11 00:47:18 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:18.715702 | orchestrator | 2026-03-11 00:47:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:21.758186 | orchestrator | 2026-03-11 00:47:21 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:21.765042 | orchestrator | 2026-03-11 00:47:21 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:21.766082 | orchestrator | 2026-03-11 00:47:21 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:21.767594 | orchestrator | 2026-03-11 00:47:21 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:21.769519 | orchestrator | 2026-03-11 00:47:21 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:21.769564 | orchestrator | 2026-03-11 00:47:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:24.809345 | orchestrator | 2026-03-11 00:47:24 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:24.810187 | orchestrator | 2026-03-11 00:47:24 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:24.810826 | orchestrator | 2026-03-11 00:47:24 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:24.812046 | orchestrator | 2026-03-11 00:47:24 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:24.813252 | orchestrator | 2026-03-11 00:47:24 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:24.813291 | orchestrator | 2026-03-11 00:47:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:27.890703 | orchestrator | 2026-03-11 00:47:27 | INFO  | Task 
ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:27.895552 | orchestrator | 2026-03-11 00:47:27 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:27.899690 | orchestrator | 2026-03-11 00:47:27 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:27.902301 | orchestrator | 2026-03-11 00:47:27 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:27.904039 | orchestrator | 2026-03-11 00:47:27 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:27.904086 | orchestrator | 2026-03-11 00:47:27 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:30.989250 | orchestrator | 2026-03-11 00:47:30 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:30.989295 | orchestrator | 2026-03-11 00:47:30 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:30.991502 | orchestrator | 2026-03-11 00:47:30 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:30.993186 | orchestrator | 2026-03-11 00:47:30 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:30.994491 | orchestrator | 2026-03-11 00:47:30 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:30.994525 | orchestrator | 2026-03-11 00:47:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:34.110816 | orchestrator | 2026-03-11 00:47:34 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:34.112998 | orchestrator | 2026-03-11 00:47:34 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:34.114929 | orchestrator | 2026-03-11 00:47:34 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:34.117806 | orchestrator | 2026-03-11 00:47:34 | INFO  | Task 
a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:34.123779 | orchestrator | 2026-03-11 00:47:34 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:34.123848 | orchestrator | 2026-03-11 00:47:34 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:37.179556 | orchestrator | 2026-03-11 00:47:37 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:37.179619 | orchestrator | 2026-03-11 00:47:37 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:37.180864 | orchestrator | 2026-03-11 00:47:37 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:37.183067 | orchestrator | 2026-03-11 00:47:37 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:37.184896 | orchestrator | 2026-03-11 00:47:37 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:37.184949 | orchestrator | 2026-03-11 00:47:37 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:40.225328 | orchestrator | 2026-03-11 00:47:40 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:40.225955 | orchestrator | 2026-03-11 00:47:40 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:40.226006 | orchestrator | 2026-03-11 00:47:40 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:40.227158 | orchestrator | 2026-03-11 00:47:40 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:40.227185 | orchestrator | 2026-03-11 00:47:40 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:40.227191 | orchestrator | 2026-03-11 00:47:40 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:43.272331 | orchestrator | 2026-03-11 00:47:43 | INFO  | Task 
ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:43.277211 | orchestrator | 2026-03-11 00:47:43 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:43.280989 | orchestrator | 2026-03-11 00:47:43 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:43.282529 | orchestrator | 2026-03-11 00:47:43 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:43.283412 | orchestrator | 2026-03-11 00:47:43 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:43.283446 | orchestrator | 2026-03-11 00:47:43 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:46.319154 | orchestrator | 2026-03-11 00:47:46 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state STARTED 2026-03-11 00:47:46.319211 | orchestrator | 2026-03-11 00:47:46 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:46.319219 | orchestrator | 2026-03-11 00:47:46 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:46.319225 | orchestrator | 2026-03-11 00:47:46 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:46.319826 | orchestrator | 2026-03-11 00:47:46 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:46.319849 | orchestrator | 2026-03-11 00:47:46 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:49.349846 | orchestrator | 2026-03-11 00:47:49.349904 | orchestrator | 2026-03-11 00:47:49.349913 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-11 00:47:49.349919 | orchestrator | 2026-03-11 00:47:49.349924 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-03-11 00:47:49.349932 | orchestrator | Wednesday 11 March 2026 00:46:25 +0000 (0:00:00.695) 
0:00:00.695 ******* 2026-03-11 00:47:49.349939 | orchestrator | ok: [testbed-manager] => { 2026-03-11 00:47:49.349946 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-03-11 00:47:49.349952 | orchestrator | } 2026-03-11 00:47:49.349958 | orchestrator | 2026-03-11 00:47:49.349962 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-03-11 00:47:49.349967 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:00.322) 0:00:01.018 ******* 2026-03-11 00:47:49.349972 | orchestrator | ok: [testbed-manager] 2026-03-11 00:47:49.349978 | orchestrator | 2026-03-11 00:47:49.349983 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-03-11 00:47:49.349988 | orchestrator | Wednesday 11 March 2026 00:46:27 +0000 (0:00:01.519) 0:00:02.538 ******* 2026-03-11 00:47:49.349993 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-03-11 00:47:49.349997 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-03-11 00:47:49.350000 | orchestrator | 2026-03-11 00:47:49.350049 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-03-11 00:47:49.350056 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:02.710) 0:00:05.248 ******* 2026-03-11 00:47:49.350061 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350080 | orchestrator | 2026-03-11 00:47:49.350086 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-03-11 00:47:49.350092 | orchestrator | Wednesday 11 March 2026 00:46:33 +0000 (0:00:02.900) 0:00:08.149 ******* 2026-03-11 00:47:49.350095 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350098 | orchestrator | 2026-03-11 00:47:49.350101 | orchestrator | TASK [osism.services.homer : Manage homer service] 
***************************** 2026-03-11 00:47:49.350105 | orchestrator | Wednesday 11 March 2026 00:46:35 +0000 (0:00:01.728) 0:00:09.877 ******* 2026-03-11 00:47:49.350108 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-03-11 00:47:49.350111 | orchestrator | ok: [testbed-manager] 2026-03-11 00:47:49.350114 | orchestrator | 2026-03-11 00:47:49.350117 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-03-11 00:47:49.350128 | orchestrator | Wednesday 11 March 2026 00:46:59 +0000 (0:00:24.796) 0:00:34.674 ******* 2026-03-11 00:47:49.350135 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350139 | orchestrator | 2026-03-11 00:47:49.350142 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:47:49.350149 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:47:49.350158 | orchestrator | 2026-03-11 00:47:49.350163 | orchestrator | 2026-03-11 00:47:49.350168 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:47:49.350173 | orchestrator | Wednesday 11 March 2026 00:47:04 +0000 (0:00:04.144) 0:00:38.818 ******* 2026-03-11 00:47:49.350177 | orchestrator | =============================================================================== 2026-03-11 00:47:49.350183 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.80s 2026-03-11 00:47:49.350188 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.14s 2026-03-11 00:47:49.350193 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.90s 2026-03-11 00:47:49.350198 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.71s 2026-03-11 00:47:49.350203 | orchestrator | 
osism.services.homer : Copy docker-compose.yml file --------------------- 1.73s 2026-03-11 00:47:49.350208 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.52s 2026-03-11 00:47:49.350214 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.32s 2026-03-11 00:47:49.350219 | orchestrator | 2026-03-11 00:47:49.350224 | orchestrator | 2026-03-11 00:47:49.350230 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-11 00:47:49.350235 | orchestrator | 2026-03-11 00:47:49.350241 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-11 00:47:49.350246 | orchestrator | Wednesday 11 March 2026 00:46:27 +0000 (0:00:01.013) 0:00:01.013 ******* 2026-03-11 00:47:49.350252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-11 00:47:49.350258 | orchestrator | 2026-03-11 00:47:49.350263 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-11 00:47:49.350266 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:00.598) 0:00:01.612 ******* 2026-03-11 00:47:49.350269 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-11 00:47:49.350272 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-11 00:47:49.350276 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-11 00:47:49.350279 | orchestrator | 2026-03-11 00:47:49.350282 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-11 00:47:49.350297 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:02.870) 0:00:04.482 ******* 2026-03-11 00:47:49.350351 | orchestrator | changed: 
[testbed-manager] 2026-03-11 00:47:49.350366 | orchestrator | 2026-03-11 00:47:49.350371 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-11 00:47:49.350379 | orchestrator | Wednesday 11 March 2026 00:46:33 +0000 (0:00:02.896) 0:00:07.379 ******* 2026-03-11 00:47:49.350396 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-03-11 00:47:49.350402 | orchestrator | ok: [testbed-manager] 2026-03-11 00:47:49.350407 | orchestrator | 2026-03-11 00:47:49.350412 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-11 00:47:49.350417 | orchestrator | Wednesday 11 March 2026 00:47:07 +0000 (0:00:33.401) 0:00:40.780 ******* 2026-03-11 00:47:49.350422 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350428 | orchestrator | 2026-03-11 00:47:49.350433 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-11 00:47:49.350444 | orchestrator | Wednesday 11 March 2026 00:47:09 +0000 (0:00:01.908) 0:00:42.689 ******* 2026-03-11 00:47:49.350448 | orchestrator | ok: [testbed-manager] 2026-03-11 00:47:49.350452 | orchestrator | 2026-03-11 00:47:49.350455 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-11 00:47:49.350459 | orchestrator | Wednesday 11 March 2026 00:47:10 +0000 (0:00:00.978) 0:00:43.667 ******* 2026-03-11 00:47:49.350463 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350466 | orchestrator | 2026-03-11 00:47:49.350470 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-11 00:47:49.350473 | orchestrator | Wednesday 11 March 2026 00:47:12 +0000 (0:00:02.133) 0:00:45.801 ******* 2026-03-11 00:47:49.350477 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350480 | orchestrator | 2026-03-11 
00:47:49.350484 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-11 00:47:49.350487 | orchestrator | Wednesday 11 March 2026 00:47:12 +0000 (0:00:00.661) 0:00:46.462 ******* 2026-03-11 00:47:49.350491 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350496 | orchestrator | 2026-03-11 00:47:49.350502 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-11 00:47:49.350507 | orchestrator | Wednesday 11 March 2026 00:47:13 +0000 (0:00:00.553) 0:00:47.016 ******* 2026-03-11 00:47:49.350513 | orchestrator | ok: [testbed-manager] 2026-03-11 00:47:49.350518 | orchestrator | 2026-03-11 00:47:49.350524 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:47:49.350529 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:47:49.350535 | orchestrator | 2026-03-11 00:47:49.350543 | orchestrator | 2026-03-11 00:47:49.350549 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:47:49.350554 | orchestrator | Wednesday 11 March 2026 00:47:13 +0000 (0:00:00.453) 0:00:47.469 ******* 2026-03-11 00:47:49.350559 | orchestrator | =============================================================================== 2026-03-11 00:47:49.350565 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.40s 2026-03-11 00:47:49.350569 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.90s 2026-03-11 00:47:49.350575 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.87s 2026-03-11 00:47:49.350580 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.13s 2026-03-11 00:47:49.350585 | orchestrator | osism.services.openstackclient : Copy 
openstack wrapper script ---------- 1.91s 2026-03-11 00:47:49.350590 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.98s 2026-03-11 00:47:49.350595 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.66s 2026-03-11 00:47:49.350600 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.60s 2026-03-11 00:47:49.350605 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.55s 2026-03-11 00:47:49.350610 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s 2026-03-11 00:47:49.350621 | orchestrator | 2026-03-11 00:47:49.350626 | orchestrator | 2026-03-11 00:47:49.350631 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-11 00:47:49.350636 | orchestrator | 2026-03-11 00:47:49.350642 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-11 00:47:49.350648 | orchestrator | Wednesday 11 March 2026 00:46:43 +0000 (0:00:00.259) 0:00:00.259 ******* 2026-03-11 00:47:49.350653 | orchestrator | ok: [testbed-manager] 2026-03-11 00:47:49.350658 | orchestrator | 2026-03-11 00:47:49.350664 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-11 00:47:49.350669 | orchestrator | Wednesday 11 March 2026 00:46:45 +0000 (0:00:01.892) 0:00:02.152 ******* 2026-03-11 00:47:49.350674 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-11 00:47:49.350680 | orchestrator | 2026-03-11 00:47:49.350685 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-11 00:47:49.350692 | orchestrator | Wednesday 11 March 2026 00:46:46 +0000 (0:00:00.594) 0:00:02.746 ******* 2026-03-11 00:47:49.350697 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350702 | 
orchestrator | 2026-03-11 00:47:49.350708 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-11 00:47:49.350711 | orchestrator | Wednesday 11 March 2026 00:46:47 +0000 (0:00:01.272) 0:00:04.018 ******* 2026-03-11 00:47:49.350800 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-03-11 00:47:49.350805 | orchestrator | ok: [testbed-manager] 2026-03-11 00:47:49.350808 | orchestrator | 2026-03-11 00:47:49.350812 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-11 00:47:49.350815 | orchestrator | Wednesday 11 March 2026 00:47:41 +0000 (0:00:53.989) 0:00:58.008 ******* 2026-03-11 00:47:49.350818 | orchestrator | changed: [testbed-manager] 2026-03-11 00:47:49.350822 | orchestrator | 2026-03-11 00:47:49.350825 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:47:49.350828 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:47:49.350832 | orchestrator | 2026-03-11 00:47:49.350835 | orchestrator | 2026-03-11 00:47:49.350838 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:47:49.350847 | orchestrator | Wednesday 11 March 2026 00:47:45 +0000 (0:00:04.165) 0:01:02.174 ******* 2026-03-11 00:47:49.350851 | orchestrator | =============================================================================== 2026-03-11 00:47:49.350854 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.99s 2026-03-11 00:47:49.350857 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.17s 2026-03-11 00:47:49.350864 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.89s 2026-03-11 00:47:49.350868 | orchestrator | 
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.27s 2026-03-11 00:47:49.350871 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.59s 2026-03-11 00:47:49.350874 | orchestrator | 2026-03-11 00:47:49 | INFO  | Task ea50e200-b927-4e17-a3b0-bdb1ce6ba657 is in state SUCCESS 2026-03-11 00:47:49.351322 | orchestrator | 2026-03-11 00:47:49 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:49.351879 | orchestrator | 2026-03-11 00:47:49 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:49.352853 | orchestrator | 2026-03-11 00:47:49 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:49.353817 | orchestrator | 2026-03-11 00:47:49 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:49.353843 | orchestrator | 2026-03-11 00:47:49 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:52.389962 | orchestrator | 2026-03-11 00:47:52 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:52.391705 | orchestrator | 2026-03-11 00:47:52 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:52.392250 | orchestrator | 2026-03-11 00:47:52 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:52.393366 | orchestrator | 2026-03-11 00:47:52 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:52.393455 | orchestrator | 2026-03-11 00:47:52 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:55.443754 | orchestrator | 2026-03-11 00:47:55 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:55.447585 | orchestrator | 2026-03-11 00:47:55 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:55.451653 | orchestrator | 2026-03-11 00:47:55 | INFO  | 
Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:55.453273 | orchestrator | 2026-03-11 00:47:55 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:55.453327 | orchestrator | 2026-03-11 00:47:55 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:58.498458 | orchestrator | 2026-03-11 00:47:58 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:47:58.502173 | orchestrator | 2026-03-11 00:47:58 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:47:58.503481 | orchestrator | 2026-03-11 00:47:58 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state STARTED 2026-03-11 00:47:58.505870 | orchestrator | 2026-03-11 00:47:58 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:47:58.505924 | orchestrator | 2026-03-11 00:47:58 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:01.548856 | orchestrator | 2026-03-11 00:48:01 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:48:01.550192 | orchestrator | 2026-03-11 00:48:01 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:48:01.551246 | orchestrator | 2026-03-11 00:48:01 | INFO  | Task a003fa72-c3f9-482d-9ce5-231388788e2a is in state SUCCESS 2026-03-11 00:48:01.551846 | orchestrator | 2026-03-11 00:48:01.551917 | orchestrator | 2026-03-11 00:48:01.551927 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:48:01.551934 | orchestrator | 2026-03-11 00:48:01.551939 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:48:01.551944 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:00.843) 0:00:00.843 ******* 2026-03-11 00:48:01.551951 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-11 
00:48:01.551957 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-11 00:48:01.551963 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-11 00:48:01.551968 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-11 00:48:01.551973 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-11 00:48:01.551979 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-11 00:48:01.551984 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-11 00:48:01.551989 | orchestrator | 2026-03-11 00:48:01.551995 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-11 00:48:01.552000 | orchestrator | 2026-03-11 00:48:01.552005 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-11 00:48:01.552010 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:02.300) 0:00:03.144 ******* 2026-03-11 00:48:01.552033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:48:01.552056 | orchestrator | 2026-03-11 00:48:01.552060 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-11 00:48:01.552063 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:01.193) 0:00:04.337 ******* 2026-03-11 00:48:01.552066 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:01.552070 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:48:01.552073 | orchestrator | ok: [testbed-manager] 2026-03-11 00:48:01.552077 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:01.552082 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:48:01.552090 | orchestrator | ok: 
[testbed-node-4] 2026-03-11 00:48:01.552096 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:48:01.552101 | orchestrator | 2026-03-11 00:48:01.552106 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-11 00:48:01.552111 | orchestrator | Wednesday 11 March 2026 00:46:32 +0000 (0:00:02.169) 0:00:06.507 ******* 2026-03-11 00:48:01.552116 | orchestrator | ok: [testbed-manager] 2026-03-11 00:48:01.552120 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:01.552125 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:01.552130 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:48:01.552135 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:48:01.552140 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:48:01.552146 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:48:01.552152 | orchestrator | 2026-03-11 00:48:01.552157 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-03-11 00:48:01.552162 | orchestrator | Wednesday 11 March 2026 00:46:36 +0000 (0:00:03.996) 0:00:10.503 ******* 2026-03-11 00:48:01.552167 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:01.552172 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:01.552177 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:01.552182 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:01.552187 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:01.552192 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:01.552197 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:01.552202 | orchestrator | 2026-03-11 00:48:01.552207 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-11 00:48:01.552212 | orchestrator | Wednesday 11 March 2026 00:46:38 +0000 (0:00:01.992) 0:00:12.496 ******* 2026-03-11 00:48:01.552217 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:01.552222 | orchestrator 
| changed: [testbed-node-0] 2026-03-11 00:48:01.552227 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:01.552232 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:01.552236 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:01.552241 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:01.552253 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:01.552259 | orchestrator | 2026-03-11 00:48:01.552265 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-11 00:48:01.552270 | orchestrator | Wednesday 11 March 2026 00:46:49 +0000 (0:00:11.444) 0:00:23.941 ******* 2026-03-11 00:48:01.552276 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:01.552281 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:01.552287 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:01.552292 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:01.552296 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:01.552301 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:01.552306 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:01.552311 | orchestrator | 2026-03-11 00:48:01.552316 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-11 00:48:01.552322 | orchestrator | Wednesday 11 March 2026 00:47:29 +0000 (0:00:40.058) 0:01:03.999 ******* 2026-03-11 00:48:01.552327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:48:01.552339 | orchestrator | 2026-03-11 00:48:01.552343 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-11 00:48:01.552346 | orchestrator | Wednesday 11 March 2026 00:47:31 +0000 (0:00:01.627) 0:01:05.627 ******* 
2026-03-11 00:48:01.552349 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-11 00:48:01.552353 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-11 00:48:01.552356 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-11 00:48:01.552360 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-11 00:48:01.552375 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-11 00:48:01.552381 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-11 00:48:01.552386 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-11 00:48:01.552392 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-11 00:48:01.552398 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-11 00:48:01.552404 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-11 00:48:01.552410 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-11 00:48:01.552415 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-11 00:48:01.552421 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-11 00:48:01.552426 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-11 00:48:01.552432 | orchestrator | 2026-03-11 00:48:01.552437 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-11 00:48:01.552443 | orchestrator | Wednesday 11 March 2026 00:47:37 +0000 (0:00:06.038) 0:01:11.665 ******* 2026-03-11 00:48:01.552450 | orchestrator | ok: [testbed-manager] 2026-03-11 00:48:01.552455 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:01.552461 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:48:01.552466 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:01.552472 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:48:01.552478 | orchestrator | ok: [testbed-node-4] 
2026-03-11 00:48:01.552484 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:48:01.552490 | orchestrator | 2026-03-11 00:48:01.552496 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-11 00:48:01.552502 | orchestrator | Wednesday 11 March 2026 00:47:38 +0000 (0:00:01.199) 0:01:12.865 ******* 2026-03-11 00:48:01.552512 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:01.552518 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:01.552524 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:01.552530 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:01.552536 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:01.552541 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:01.552547 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:01.552552 | orchestrator | 2026-03-11 00:48:01.552557 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-11 00:48:01.552563 | orchestrator | Wednesday 11 March 2026 00:47:39 +0000 (0:00:01.251) 0:01:14.117 ******* 2026-03-11 00:48:01.552568 | orchestrator | ok: [testbed-manager] 2026-03-11 00:48:01.552574 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:01.552579 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:01.552585 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:48:01.552591 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:48:01.552596 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:48:01.552602 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:48:01.552607 | orchestrator | 2026-03-11 00:48:01.552613 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-11 00:48:01.552618 | orchestrator | Wednesday 11 March 2026 00:47:41 +0000 (0:00:01.813) 0:01:15.931 ******* 2026-03-11 00:48:01.552624 | orchestrator | ok: [testbed-manager] 2026-03-11 00:48:01.552629 | orchestrator | ok: 
[testbed-node-1] 2026-03-11 00:48:01.552638 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:01.552644 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:01.552649 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:48:01.552654 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:48:01.552659 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:48:01.552665 | orchestrator | 2026-03-11 00:48:01.552669 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-11 00:48:01.552672 | orchestrator | Wednesday 11 March 2026 00:47:44 +0000 (0:00:03.079) 0:01:19.010 ******* 2026-03-11 00:48:01.552678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-11 00:48:01.552685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:48:01.552691 | orchestrator | 2026-03-11 00:48:01.552697 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-11 00:48:01.552729 | orchestrator | Wednesday 11 March 2026 00:47:46 +0000 (0:00:01.579) 0:01:20.589 ******* 2026-03-11 00:48:01.552735 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:01.552741 | orchestrator | 2026-03-11 00:48:01.552746 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-11 00:48:01.552751 | orchestrator | Wednesday 11 March 2026 00:47:48 +0000 (0:00:02.049) 0:01:22.639 ******* 2026-03-11 00:48:01.552756 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:01.552761 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:01.552767 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:01.552772 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:01.552777 
| orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:01.552783 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:01.552788 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:01.552793 | orchestrator | 2026-03-11 00:48:01.552798 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:48:01.552803 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:01.552809 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:01.552815 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:01.552819 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:01.552827 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:01.552830 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:01.552834 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:01.552838 | orchestrator | 2026-03-11 00:48:01.552843 | orchestrator | 2026-03-11 00:48:01.552848 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:48:01.552852 | orchestrator | Wednesday 11 March 2026 00:47:59 +0000 (0:00:11.272) 0:01:33.911 ******* 2026-03-11 00:48:01.552857 | orchestrator | =============================================================================== 2026-03-11 00:48:01.552861 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.06s 2026-03-11 00:48:01.552866 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.44s 2026-03-11 
00:48:01.552871 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.27s 2026-03-11 00:48:01.552881 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.04s 2026-03-11 00:48:01.552886 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.00s 2026-03-11 00:48:01.552892 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.08s 2026-03-11 00:48:01.552897 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.30s 2026-03-11 00:48:01.552905 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.17s 2026-03-11 00:48:01.552910 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.05s 2026-03-11 00:48:01.552914 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.99s 2026-03-11 00:48:01.552919 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.81s 2026-03-11 00:48:01.552924 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.63s 2026-03-11 00:48:01.552929 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.58s 2026-03-11 00:48:01.552933 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.25s 2026-03-11 00:48:01.552938 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.20s 2026-03-11 00:48:01.552944 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.19s 2026-03-11 00:48:01.554245 | orchestrator | 2026-03-11 00:48:01 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:48:01.554937 | orchestrator | 2026-03-11 00:48:01 | INFO  | Wait 1 second(s) until the next check 
2026-03-11 00:48:04.599460 | orchestrator | 2026-03-11 00:48:04 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state STARTED 2026-03-11 00:48:04.602958 | orchestrator | 2026-03-11 00:48:04 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:48:04.604651 | orchestrator | 2026-03-11 00:48:04 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:48:04.605188 | orchestrator | 2026-03-11 00:48:04 | INFO  | Wait 1 second(s) until the next check
[identical STARTED/wait polling output for these three tasks repeated every ~3 seconds from 00:48:07 through 00:48:44]
2026-03-11 00:48:47.271877 | orchestrator | 2026-03-11 00:48:47.271970 | orchestrator | 2026-03-11 00:48:47 | INFO  | Task da7ba9c1-10d3-45bc-bf8a-07daf4ee0c96 is in state SUCCESS 2026-03-11 00:48:47.273597 | orchestrator | 2026-03-11 00:48:47.273661 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-11 00:48:47.273671 | orchestrator | 2026-03-11 00:48:47.273679 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-11 00:48:47.273687 | orchestrator | Wednesday 11 March 2026 00:46:19 +0000 (0:00:00.251) 0:00:00.251 ******* 2026-03-11 00:48:47.273696 | orchestrator | included: 
/ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:48:47.273764 | orchestrator | 2026-03-11 00:48:47.273772 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-11 00:48:47.273780 | orchestrator | Wednesday 11 March 2026 00:46:20 +0000 (0:00:01.323) 0:00:01.575 ******* 2026-03-11 00:48:47.273788 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:47.273802 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:47.273810 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:47.273818 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:47.273825 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:47.273833 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:47.273841 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:47.273848 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:47.273855 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:47.273862 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:47.273871 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:47.273878 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:47.273885 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-03-11 00:48:47.273914 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:47.273922 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:47.273929 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:47.273937 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:47.273944 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:47.273952 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:47.273959 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:47.273966 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:47.273974 | orchestrator | 2026-03-11 00:48:47.273981 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-11 00:48:47.273988 | orchestrator | Wednesday 11 March 2026 00:46:25 +0000 (0:00:04.563) 0:00:06.139 ******* 2026-03-11 00:48:47.273995 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:48:47.274004 | orchestrator | 2026-03-11 00:48:47.274011 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-11 00:48:47.274097 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:01.115) 0:00:07.255 ******* 2026-03-11 00:48:47.274110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.274120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.274165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.274184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.274192 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.274207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.274214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.274221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274350 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274357 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274444 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.274452 | orchestrator | 2026-03-11 00:48:47.274459 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-11 00:48:47.274467 | orchestrator | Wednesday 11 March 2026 00:46:31 +0000 
(0:00:05.460) 0:00:12.715 ******* 2026-03-11 00:48:47.274474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:47.274482 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.274489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.274511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:47.274518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.274533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.274541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})
2026-03-11 00:48:47.274548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274563 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:48:47.274570 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:48:47.274576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274607 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:48:47.274617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274638 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:48:47.274645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274651 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:48:47.274657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274672 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:48:47.274690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274738 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:48:47.274746 | orchestrator |
2026-03-11 00:48:47.274753 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-03-11 00:48:47.274761 | orchestrator | Wednesday 11 March 2026 00:46:33 +0000 (0:00:01.462) 0:00:14.178 *******
2026-03-11 00:48:47.274768 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274790 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274797 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:48:47.274804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274834 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:48:47.274844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274865 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:48:47.274873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274898 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:48:47.274910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274940 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:48:47.274948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.274971 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:48:47.274978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.274995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275008 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:48:47.275015 | orchestrator |
2026-03-11 00:48:47.275022 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-11 00:48:47.275032 | orchestrator | Wednesday 11 March 2026 00:46:35 +0000 (0:00:02.539) 0:00:16.717 *******
2026-03-11 00:48:47.275039 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:48:47.275045 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:48:47.275052 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:48:47.275059 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:48:47.275066 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:48:47.275073 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:48:47.275080 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:48:47.275087 | orchestrator |
2026-03-11 00:48:47.275094 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-11 00:48:47.275102 | orchestrator | Wednesday 11 March 2026 00:46:36 +0000 (0:00:00.675) 0:00:17.393 *******
2026-03-11 00:48:47.275109 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:48:47.275116 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:48:47.275123 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:48:47.275172 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:48:47.275181 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:48:47.275187 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:48:47.275195 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:48:47.275202 | orchestrator |
2026-03-11 00:48:47.275225 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-11 00:48:47.275233 | orchestrator | Wednesday 11 March 2026 00:46:37 +0000 (0:00:01.186) 0:00:18.579 *******
2026-03-11 00:48:47.275241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.275249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.275262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.275270 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.275286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275298 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.275306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.275321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11 00:48:47.275341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275372 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275414 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:48:47.275440 | orchestrator |
2026-03-11 00:48:47.275447 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-11 00:48:47.275468 | orchestrator | Wednesday 11 March 2026 00:46:45 +0000 (0:00:07.527) 0:00:26.107 *******
2026-03-11 00:48:47.275475 | orchestrator | [WARNING]: Skipped
2026-03-11 00:48:47.275487 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-11 00:48:47.275495 | orchestrator | to this access issue:
2026-03-11 00:48:47.275503 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-11 00:48:47.275510 | orchestrator | directory
2026-03-11 00:48:47.275518 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 00:48:47.275525 | orchestrator |
2026-03-11 00:48:47.275532 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-11 00:48:47.275539 | orchestrator | Wednesday 11 March 2026 00:46:47 +0000 (0:00:01.952) 0:00:28.059 *******
2026-03-11 00:48:47.275547 | orchestrator | [WARNING]: Skipped
2026-03-11 00:48:47.275554 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-11 00:48:47.275565 | orchestrator | to this access issue:
2026-03-11 00:48:47.275572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-11 00:48:47.275577 | orchestrator | directory
2026-03-11 00:48:47.275583 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 00:48:47.275589 | orchestrator |
2026-03-11 00:48:47.275595 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-11 00:48:47.275601 | orchestrator | Wednesday 11 March 2026 00:46:48 +0000 (0:00:00.827) 0:00:28.887 *******
2026-03-11 00:48:47.275608 | orchestrator | [WARNING]: Skipped
2026-03-11 00:48:47.275615 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-11 00:48:47.275623 | orchestrator | to this access issue:
2026-03-11 00:48:47.275630 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-11 00:48:47.275637 | orchestrator | directory
2026-03-11 00:48:47.275644 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 00:48:47.275651 | orchestrator |
2026-03-11 00:48:47.275659 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-11 00:48:47.275666 | orchestrator | Wednesday 11 March 2026 00:46:48 +0000 (0:00:00.671) 0:00:29.559 *******
2026-03-11 00:48:47.275673 | orchestrator | [WARNING]: Skipped
2026-03-11 00:48:47.275680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-11 00:48:47.275687 | orchestrator | to this access issue:
2026-03-11 00:48:47.275695 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-11 00:48:47.275715 | orchestrator | directory
2026-03-11 00:48:47.275722 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 00:48:47.275728 | orchestrator |
2026-03-11 00:48:47.275734 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-11 00:48:47.275740 | orchestrator | Wednesday 11 March 2026 00:46:49 +0000 (0:00:00.801) 0:00:30.361 *******
2026-03-11 00:48:47.275747 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:48:47.275754 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:48:47.275761 | orchestrator | changed: [testbed-manager]
2026-03-11 00:48:47.275769 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:48:47.275776 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:48:47.275783 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:48:47.275791 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:48:47.275798 | orchestrator |
2026-03-11 00:48:47.275805 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-11 00:48:47.275813 | orchestrator | Wednesday 11 March 2026 00:46:53 +0000 (0:00:04.220) 0:00:34.581 *******
2026-03-11 00:48:47.275820 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-11 00:48:47.275828 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-11 00:48:47.275835 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-11 00:48:47.275842 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-11 00:48:47.275850 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-11 00:48:47.275857 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-11 00:48:47.275864 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-11 00:48:47.275871 | orchestrator |
2026-03-11 00:48:47.275879 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-11 00:48:47.275886 | orchestrator | Wednesday 11 March 2026 00:46:57 +0000 (0:00:04.132) 0:00:38.713 *******
2026-03-11 00:48:47.275894 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:48:47.275901 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:48:47.275917 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:48:47.275924 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:48:47.275937 | orchestrator | changed: [testbed-manager]
2026-03-11 00:48:47.275944 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:48:47.275952 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:48:47.275959 | orchestrator |
2026-03-11 00:48:47.275966 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-11 00:48:47.275973 | orchestrator | Wednesday 11 March 2026 00:47:02 +0000 (0:00:04.323) 0:00:43.037 *******
2026-03-11 00:48:47.275985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-11
00:48:47.275994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.276002 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276010 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.276018 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.276179 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.276201 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276217 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276228 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.276242 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276249 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276331 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 
00:48:47.276348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.276356 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:47.276370 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276437 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276448 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276462 | orchestrator | 2026-03-11 00:48:47.276470 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-11 00:48:47.276477 | orchestrator | Wednesday 11 March 2026 00:47:04 +0000 (0:00:02.646) 0:00:45.683 ******* 2026-03-11 00:48:47.276484 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:47.276492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:47.276499 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:47.276506 | orchestrator | 
changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:47.276512 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:47.276520 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:47.276527 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:47.276534 | orchestrator | 2026-03-11 00:48:47.276546 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-11 00:48:47.276554 | orchestrator | Wednesday 11 March 2026 00:47:07 +0000 (0:00:02.614) 0:00:48.297 ******* 2026-03-11 00:48:47.276562 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:47.276568 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:47.276573 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:47.276579 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:47.276585 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:47.276591 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:47.276603 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:47.276611 | orchestrator | 2026-03-11 00:48:47.276618 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-11 00:48:47.276626 | orchestrator | Wednesday 11 March 2026 00:47:09 +0000 (0:00:02.301) 0:00:50.599 ******* 2026-03-11 00:48:47.276634 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276649 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276691 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276749 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:47.276781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 
00:48:47.276848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:47.276878 | orchestrator | 2026-03-11 00:48:47.276885 | orchestrator | TASK [common : Creating log volume] 
********************************************
2026-03-11 00:48:47.276893 | orchestrator | Wednesday 11 March 2026 00:47:13 +0000 (0:00:03.624) 0:00:54.224 *******
2026-03-11 00:48:47.276904 | orchestrator | changed: [testbed-manager]
2026-03-11 00:48:47.276911 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:48:47.276919 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:48:47.276926 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:48:47.276933 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:48:47.276940 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:48:47.276947 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:48:47.276955 | orchestrator |
2026-03-11 00:48:47.276962 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-11 00:48:47.276969 | orchestrator | Wednesday 11 March 2026 00:47:15 +0000 (0:00:02.083) 0:00:56.307 *******
2026-03-11 00:48:47.276977 | orchestrator | changed: [testbed-manager]
2026-03-11 00:48:47.276984 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:48:47.276991 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:48:47.276998 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:48:47.277006 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:48:47.277013 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:48:47.277020 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:48:47.277027 | orchestrator |
2026-03-11 00:48:47.277037 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-11 00:48:47.277045 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:01.009) 0:00:57.317 *******
2026-03-11 00:48:47.277052 | orchestrator |
2026-03-11 00:48:47.277059 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-11 00:48:47.277067 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.073) 0:00:57.390 *******
2026-03-11 00:48:47.277073 | orchestrator |
2026-03-11 00:48:47.277081 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-11 00:48:47.277093 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.069) 0:00:57.459 *******
2026-03-11 00:48:47.277101 | orchestrator |
2026-03-11 00:48:47.277108 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-11 00:48:47.277115 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.168) 0:00:57.628 *******
2026-03-11 00:48:47.277122 | orchestrator |
2026-03-11 00:48:47.277129 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-11 00:48:47.277137 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.048) 0:00:57.677 *******
2026-03-11 00:48:47.277144 | orchestrator |
2026-03-11 00:48:47.277151 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-11 00:48:47.277158 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.049) 0:00:57.726 *******
2026-03-11 00:48:47.277165 | orchestrator |
2026-03-11 00:48:47.277173 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-11 00:48:47.277180 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.047) 0:00:57.773 *******
2026-03-11 00:48:47.277187 | orchestrator |
2026-03-11 00:48:47.277194 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-11 00:48:47.277201 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.069) 0:00:57.843 *******
2026-03-11 00:48:47.277208 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:48:47.277216 | orchestrator | changed: [testbed-manager]
2026-03-11 00:48:47.277224 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:48:47.277231 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:48:47.277238 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:48:47.277245 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:48:47.277251 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:48:47.277257 | orchestrator |
2026-03-11 00:48:47.277263 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-11 00:48:47.277268 | orchestrator | Wednesday 11 March 2026 00:47:58 +0000 (0:00:41.322) 0:01:39.165 *******
2026-03-11 00:48:47.277274 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:48:47.277280 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:48:47.277287 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:48:47.277293 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:48:47.277301 | orchestrator | changed: [testbed-manager]
2026-03-11 00:48:47.277308 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:48:47.277315 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:48:47.277322 | orchestrator |
2026-03-11 00:48:47.277340 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-11 00:48:47.277348 | orchestrator | Wednesday 11 March 2026 00:48:34 +0000 (0:00:35.883) 0:02:15.048 *******
2026-03-11 00:48:47.277356 | orchestrator | ok: [testbed-manager]
2026-03-11 00:48:47.277363 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:48:47.277371 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:48:47.277378 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:48:47.277385 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:48:47.277392 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:48:47.277399 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:48:47.277406 | orchestrator |
2026-03-11 00:48:47.277413 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-11 00:48:47.277421 | orchestrator | Wednesday 11 March 2026 00:48:36 +0000 (0:00:02.057) 0:02:17.106 *******
2026-03-11 00:48:47.277428 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:48:47.277435 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:48:47.277442 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:48:47.277449 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:48:47.277457 | orchestrator | changed: [testbed-manager]
2026-03-11 00:48:47.277464 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:48:47.277471 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:48:47.277478 | orchestrator |
2026-03-11 00:48:47.277485 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:48:47.277493 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-11 00:48:47.277507 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-11 00:48:47.277514 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-11 00:48:47.277526 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-11 00:48:47.277534 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-11 00:48:47.277541 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-11 00:48:47.277548 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-11 00:48:47.277555 | orchestrator |
2026-03-11 00:48:47.277563 | orchestrator |
2026-03-11 00:48:47.277569 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:48:47.277578 | orchestrator | Wednesday 11 March 2026 00:48:45 +0000 (0:00:09.383) 0:02:26.490 *******
2026-03-11 00:48:47.277585 | orchestrator | ===============================================================================
2026-03-11 00:48:47.277591 | orchestrator | common : Restart fluentd container ------------------------------------- 41.32s
2026-03-11 00:48:47.277598 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.88s
2026-03-11 00:48:47.277605 | orchestrator | common : Restart cron container ----------------------------------------- 9.38s
2026-03-11 00:48:47.277612 | orchestrator | common : Copying over config.json files for services -------------------- 7.53s
2026-03-11 00:48:47.277619 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.46s
2026-03-11 00:48:47.277626 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.56s
2026-03-11 00:48:47.277633 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.32s
2026-03-11 00:48:47.277640 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.22s
2026-03-11 00:48:47.277647 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.13s
2026-03-11 00:48:47.277654 | orchestrator | common : Check common containers ---------------------------------------- 3.62s
2026-03-11 00:48:47.277661 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.65s
2026-03-11 00:48:47.277669 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.61s
2026-03-11 00:48:47.277676 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.54s
2026-03-11 00:48:47.277683 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.30s
2026-03-11 00:48:47.277690 | orchestrator | common : Creating log volume -------------------------------------------- 2.08s
2026-03-11 00:48:47.277697 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.06s
2026-03-11 00:48:47.277724 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.95s
2026-03-11 00:48:47.277731 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.46s
2026-03-11 00:48:47.277738 | orchestrator | common : include_tasks -------------------------------------------------- 1.32s
2026-03-11 00:48:47.277746 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.19s
2026-03-11 00:48:47.277753 | orchestrator | 2026-03-11 00:48:47 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:48:47.277760 | orchestrator | 2026-03-11 00:48:47 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:48:47.277773 | orchestrator | 2026-03-11 00:48:47 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:48:47.277781 | orchestrator | 2026-03-11 00:48:47 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:48:47.277788 | orchestrator | 2026-03-11 00:48:47 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:48:47.277795 | orchestrator | 2026-03-11 00:48:47 | INFO  | Task 03347c7e-b099-4130-9624-2bdc0d0fea8f is in state STARTED
2026-03-11 00:48:47.277802 | orchestrator | 2026-03-11 00:48:47 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:48:50.320532 | orchestrator | 2026-03-11 00:48:50 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:48:50.320619 | orchestrator | 2026-03-11 00:48:50 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:48:50.320630 | orchestrator | 2026-03-11 00:48:50 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:48:50.320646 | orchestrator | 2026-03-11 00:48:50 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:48:50.321193 | orchestrator | 2026-03-11 00:48:50 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:48:50.323115 | orchestrator | 2026-03-11 00:48:50 | INFO  | Task 03347c7e-b099-4130-9624-2bdc0d0fea8f is in state STARTED
2026-03-11 00:48:50.323155 | orchestrator | 2026-03-11 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:48:53.357535 | orchestrator | 2026-03-11 00:48:53 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:48:53.360009 | orchestrator | 2026-03-11 00:48:53 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:48:53.360076 | orchestrator | 2026-03-11 00:48:53 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:48:53.360087 | orchestrator | 2026-03-11 00:48:53 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:48:53.360309 | orchestrator | 2026-03-11 00:48:53 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:48:53.362675 | orchestrator | 2026-03-11 00:48:53 | INFO  | Task 03347c7e-b099-4130-9624-2bdc0d0fea8f is in state STARTED
2026-03-11 00:48:53.362884 | orchestrator | 2026-03-11 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:48:56.416435 | orchestrator | 2026-03-11 00:48:56 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:48:56.416513 | orchestrator | 2026-03-11 00:48:56 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:48:56.416519 | orchestrator | 2026-03-11 00:48:56 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:48:56.416525 | orchestrator | 2026-03-11 00:48:56 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:48:56.416529 | orchestrator | 2026-03-11 00:48:56 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:48:56.416533 | orchestrator | 2026-03-11 00:48:56 | INFO  | Task 03347c7e-b099-4130-9624-2bdc0d0fea8f is in state STARTED
2026-03-11 00:48:56.416538 | orchestrator | 2026-03-11 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:48:59.460638 | orchestrator | 2026-03-11 00:48:59 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:48:59.460793 | orchestrator | 2026-03-11 00:48:59 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:48:59.460832 | orchestrator | 2026-03-11 00:48:59 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:48:59.460840 | orchestrator | 2026-03-11 00:48:59 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:48:59.460846 | orchestrator | 2026-03-11 00:48:59 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:48:59.461307 | orchestrator | 2026-03-11 00:48:59 | INFO  | Task 03347c7e-b099-4130-9624-2bdc0d0fea8f is in state STARTED
2026-03-11 00:48:59.461358 | orchestrator | 2026-03-11 00:48:59 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:02.498177 | orchestrator | 2026-03-11 00:49:02 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:02.498234 | orchestrator | 2026-03-11 00:49:02 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:02.500200 | orchestrator | 2026-03-11 00:49:02 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:02.500412 | orchestrator | 2026-03-11 00:49:02 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:02.500990 | orchestrator | 2026-03-11 00:49:02 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:02.501616 | orchestrator | 2026-03-11 00:49:02 | INFO  | Task 03347c7e-b099-4130-9624-2bdc0d0fea8f is in state STARTED
2026-03-11 00:49:02.501635 | orchestrator | 2026-03-11 00:49:02 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:05.549249 | orchestrator | 2026-03-11 00:49:05 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:05.549327 | orchestrator | 2026-03-11 00:49:05 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:05.549338 | orchestrator | 2026-03-11 00:49:05 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:05.549347 | orchestrator | 2026-03-11 00:49:05 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:05.549356 | orchestrator | 2026-03-11 00:49:05 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:05.549365 | orchestrator | 2026-03-11 00:49:05 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:05.549374 | orchestrator | 2026-03-11 00:49:05 | INFO  | Task 03347c7e-b099-4130-9624-2bdc0d0fea8f is in state SUCCESS
2026-03-11 00:49:05.549383 | orchestrator | 2026-03-11 00:49:05 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:08.591622 | orchestrator | 2026-03-11 00:49:08 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:08.591894 | orchestrator | 2026-03-11 00:49:08 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:08.596560 | orchestrator | 2026-03-11 00:49:08 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:08.597074 | orchestrator | 2026-03-11 00:49:08 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:08.597605 | orchestrator | 2026-03-11 00:49:08 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:08.598251 | orchestrator | 2026-03-11 00:49:08 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:08.598274 | orchestrator | 2026-03-11 00:49:08 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:11.661478 | orchestrator | 2026-03-11 00:49:11 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:11.662538 | orchestrator | 2026-03-11 00:49:11 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:11.662591 | orchestrator | 2026-03-11 00:49:11 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:11.662600 | orchestrator | 2026-03-11 00:49:11 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:11.662952 | orchestrator | 2026-03-11 00:49:11 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:11.662996 | orchestrator | 2026-03-11 00:49:11 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:11.663002 | orchestrator | 2026-03-11 00:49:11 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:14.737474 | orchestrator | 2026-03-11 00:49:14 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:14.737569 | orchestrator | 2026-03-11 00:49:14 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:14.737581 | orchestrator | 2026-03-11 00:49:14 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:14.737587 | orchestrator | 2026-03-11 00:49:14 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:14.737593 | orchestrator | 2026-03-11 00:49:14 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:14.737599 | orchestrator | 2026-03-11 00:49:14 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:14.737607 | orchestrator | 2026-03-11 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:17.737883 | orchestrator | 2026-03-11 00:49:17 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:17.741580 | orchestrator | 2026-03-11 00:49:17 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:17.742454 | orchestrator | 2026-03-11 00:49:17 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:17.743126 | orchestrator | 2026-03-11 00:49:17 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:17.743728 | orchestrator | 2026-03-11 00:49:17 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:17.744970 | orchestrator | 2026-03-11 00:49:17 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:17.745019 | orchestrator | 2026-03-11 00:49:17 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:20.777654 | orchestrator | 2026-03-11 00:49:20 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:20.777739 | orchestrator | 2026-03-11 00:49:20 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:20.786513 | orchestrator | 2026-03-11 00:49:20 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:20.786604 | orchestrator | 2026-03-11 00:49:20 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:20.788439 | orchestrator | 2026-03-11 00:49:20 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:20.788504 | orchestrator | 2026-03-11 00:49:20 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:20.788515 | orchestrator | 2026-03-11 00:49:20 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:23.816874 | orchestrator | 2026-03-11 00:49:23 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:23.817604 | orchestrator | 2026-03-11 00:49:23 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:23.817953 | orchestrator | 2026-03-11 00:49:23 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state STARTED
2026-03-11 00:49:23.818637 | orchestrator | 2026-03-11 00:49:23 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:49:23.820048 | orchestrator | 2026-03-11 00:49:23 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:49:23.823612 | orchestrator | 2026-03-11 00:49:23 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED
2026-03-11 00:49:23.823707 | orchestrator | 2026-03-11 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:49:26.864131 | orchestrator | 2026-03-11 00:49:26 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:49:26.864276 | orchestrator | 2026-03-11 00:49:26 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED
2026-03-11 00:49:26.865124 | orchestrator | 2026-03-11 00:49:26 | INFO  | Task ab1f9b2b-1ca4-44e6-9d21-9a5c859c9445 is in state SUCCESS
2026-03-11 00:49:26.866521 | orchestrator |
2026-03-11 00:49:26.866681 | orchestrator |
2026-03-11 00:49:26.866704 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 00:49:26.866709 | orchestrator |
2026-03-11 00:49:26.866714 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 00:49:26.866718 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.285) 0:00:00.285 *******
2026-03-11 00:49:26.866723 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:49:26.866728 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:49:26.866735 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:49:26.866741 | orchestrator |
2026-03-11 00:49:26.866756 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 00:49:26.866764 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.361) 0:00:00.646 *******
2026-03-11 00:49:26.866774 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-11 00:49:26.866833 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-11 00:49:26.866839 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-11 00:49:26.866846 | orchestrator |
2026-03-11 00:49:26.866853 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-11 00:49:26.866859 | orchestrator |
2026-03-11 00:49:26.866866 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-11 00:49:26.866872 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.622) 0:00:01.269 *******
2026-03-11 00:49:26.866879 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:49:26.866888 | orchestrator |
2026-03-11 00:49:26.866894 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-11 00:49:26.866901 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.862) 0:00:02.132 *******
2026-03-11 00:49:26.866910 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-11 00:49:26.866917 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-11 00:49:26.866926 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-11 00:49:26.866932 | orchestrator |
2026-03-11 00:49:26.866938 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-11 00:49:26.866944 | orchestrator | Wednesday 11 March 2026 00:48:55 +0000 (0:00:01.125) 0:00:03.257 *******
2026-03-11 00:49:26.866951 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-11 00:49:26.866957 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-11 00:49:26.866964 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-11 00:49:26.866970 | orchestrator |
2026-03-11 00:49:26.866976 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-11 00:49:26.867001 | orchestrator | Wednesday 11 March 2026 00:48:57 +0000 (0:00:02.692) 0:00:05.949 *******
2026-03-11 00:49:26.867008 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:49:26.867013 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:49:26.867019 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:49:26.867025 | orchestrator |
2026-03-11 00:49:26.867030 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-11 00:49:26.867036 | orchestrator | Wednesday 11 March 2026 00:49:00 +0000 (0:00:02.198) 0:00:08.148 *******
2026-03-11 00:49:26.867042 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:49:26.867047 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:49:26.867053 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:49:26.867058 | orchestrator |
2026-03-11 00:49:26.867063 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:49:26.867069 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:49:26.867077 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:49:26.867082 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:49:26.867089 | orchestrator |
2026-03-11 00:49:26.867095 | orchestrator |
2026-03-11 00:49:26.867100 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:49:26.867106 | orchestrator | Wednesday 11 March 2026 00:49:02 +0000 (0:00:02.263) 0:00:10.411 *******
2026-03-11 00:49:26.867112 | orchestrator | ===============================================================================
2026-03-11 00:49:26.867118 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.69s
2026-03-11 00:49:26.867124 | orchestrator | memcached : Restart memcached container --------------------------------- 2.26s
2026-03-11 00:49:26.867130 | orchestrator | memcached : Check memcached container ----------------------------------- 2.20s
2026-03-11 00:49:26.867136 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.13s
2026-03-11 00:49:26.867141 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.86s
2026-03-11 00:49:26.867147 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2026-03-11 00:49:26.867159 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-03-11 00:49:26.867165 | orchestrator |
2026-03-11 00:49:26.867171 | orchestrator |
2026-03-11 00:49:26.867177 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 00:49:26.867183 | orchestrator |
2026-03-11 00:49:26.867190 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 00:49:26.867197 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.472) 0:00:00.473 *******
2026-03-11 00:49:26.867204 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:49:26.867209 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:49:26.867216 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:49:26.867222 | orchestrator |
2026-03-11 00:49:26.867229 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 00:49:26.867264 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.504) 0:00:00.977 *******
2026-03-11 00:49:26.867278 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-11 00:49:26.867284 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-11 00:49:26.867291 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-11 00:49:26.867297 | orchestrator |
2026-03-11 00:49:26.867303 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-11 00:49:26.867309 | orchestrator |
2026-03-11 00:49:26.867315 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-11 00:49:26.867322 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.879) 0:00:01.857 *******
2026-03-11 00:49:26.867337 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:49:26.867344 | orchestrator |
2026-03-11 00:49:26.867351 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-11 00:49:26.867357 | orchestrator | Wednesday 11 March 2026 00:48:54 +0000 (0:00:00.865) 0:00:02.723 *******
2026-03-11 00:49:26.867366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867423 | orchestrator |
2026-03-11 00:49:26.867427 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-11 00:49:26.867432 | orchestrator | Wednesday 11 March 2026 00:48:56 +0000 (0:00:01.872) 0:00:04.596 *******
2026-03-11 00:49:26.867436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867479 | orchestrator |
2026-03-11 00:49:26.867483 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-11 00:49:26.867488 | orchestrator | Wednesday 11 March 2026 00:48:59 +0000 (0:00:03.240) 0:00:07.836 *******
2026-03-11 00:49:26.867492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-11 00:49:26.867506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF':
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867531 | orchestrator | 2026-03-11 00:49:26.867535 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-11 00:49:26.867540 | orchestrator | Wednesday 11 March 2026 00:49:02 +0000 (0:00:02.872) 0:00:10.708 ******* 2026-03-11 00:49:26.867544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 
'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:49:26.867581 | orchestrator | 2026-03-11 00:49:26.867585 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-11 00:49:26.867589 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:02.521) 0:00:13.230 ******* 2026-03-11 00:49:26.867594 | orchestrator | 2026-03-11 00:49:26.867598 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-11 00:49:26.867606 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:00.225) 0:00:13.455 ******* 2026-03-11 00:49:26.867610 | orchestrator | 2026-03-11 00:49:26.867614 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-11 00:49:26.867618 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:00.226) 0:00:13.682 ******* 2026-03-11 00:49:26.867621 | orchestrator | 2026-03-11 00:49:26.867625 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-11 00:49:26.867629 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:00.215) 0:00:13.897 ******* 2026-03-11 00:49:26.867632 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:49:26.867636 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:49:26.867640 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:49:26.867644 | orchestrator | 2026-03-11 00:49:26.867648 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-11 00:49:26.867651 | orchestrator | Wednesday 11 March 2026 00:49:15 +0000 (0:00:09.253) 0:00:23.151 ******* 
2026-03-11 00:49:26.867655 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:49:26.867659 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:49:26.867663 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:49:26.867666 | orchestrator | 2026-03-11 00:49:26.867670 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:49:26.867674 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:49:26.867678 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:49:26.867682 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:49:26.867686 | orchestrator | 2026-03-11 00:49:26.867689 | orchestrator | 2026-03-11 00:49:26.867693 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:49:26.867697 | orchestrator | Wednesday 11 March 2026 00:49:24 +0000 (0:00:09.728) 0:00:32.879 ******* 2026-03-11 00:49:26.867701 | orchestrator | =============================================================================== 2026-03-11 00:49:26.867704 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.73s 2026-03-11 00:49:26.867708 | orchestrator | redis : Restart redis container ----------------------------------------- 9.25s 2026-03-11 00:49:26.867712 | orchestrator | redis : Copying over default config.json files -------------------------- 3.24s 2026-03-11 00:49:26.867716 | orchestrator | redis : Copying over redis config files --------------------------------- 2.87s 2026-03-11 00:49:26.867720 | orchestrator | redis : Check redis containers ------------------------------------------ 2.52s 2026-03-11 00:49:26.867727 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.87s 2026-03-11 00:49:26.867731 | 
orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2026-03-11 00:49:26.867734 | orchestrator | redis : include_tasks --------------------------------------------------- 0.87s 2026-03-11 00:49:26.867738 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.67s 2026-03-11 00:49:26.867742 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2026-03-11 00:49:26.867745 | orchestrator | 2026-03-11 00:49:26 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:49:26.867750 | orchestrator | 2026-03-11 00:49:26 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:49:26.868656 | orchestrator | 2026-03-11 00:49:26 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED 2026-03-11 00:49:26.868688 | orchestrator | 2026-03-11 00:49:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:29.907580 | orchestrator | 2026-03-11 00:49:29 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:49:29.907675 | orchestrator | 2026-03-11 00:49:29 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:49:29.907686 | orchestrator | 2026-03-11 00:49:29 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:49:29.907712 | orchestrator | 2026-03-11 00:49:29 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:49:29.907719 | orchestrator | 2026-03-11 00:49:29 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state STARTED 2026-03-11 00:49:29.907725 | orchestrator | 2026-03-11 00:49:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:00.341284 | orchestrator | 2026-03-11 00:50:00 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:00.341328 | orchestrator | 2026-03-11 00:50:00 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:00.341335 | orchestrator | 2026-03-11 00:50:00 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:00.341341 | orchestrator | 2026-03-11 00:50:00 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:00.341346 | orchestrator | 2026-03-11 00:50:00 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:00.341350 | orchestrator | 2026-03-11 00:50:00 | INFO  | Task 0d9e223d-90bc-427e-abf1-f4104233a789 is in state SUCCESS 2026-03-11 00:50:00.341356 | orchestrator | 2026-03-11 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:00.342328 | orchestrator | 2026-03-11 00:50:00.342357 | orchestrator | 2026-03-11
00:50:00.342362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:50:00.342366 | orchestrator | 2026-03-11 00:50:00.342369 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:50:00.342372 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.371) 0:00:00.371 ******* 2026-03-11 00:50:00.342376 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:50:00.342379 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:50:00.342383 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:50:00.342386 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:50:00.342389 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:50:00.342392 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:50:00.342396 | orchestrator | 2026-03-11 00:50:00.342399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:50:00.342402 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:01.167) 0:00:01.538 ******* 2026-03-11 00:50:00.342405 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:50:00.342409 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:50:00.342415 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:50:00.342418 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:50:00.342430 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:50:00.342434 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:50:00.342437 | orchestrator | 2026-03-11 00:50:00.342440 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-11 00:50:00.342443 | 
orchestrator | 2026-03-11 00:50:00.342446 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-11 00:50:00.342449 | orchestrator | Wednesday 11 March 2026 00:48:54 +0000 (0:00:00.957) 0:00:02.495 ******* 2026-03-11 00:50:00.342453 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:50:00.342456 | orchestrator | 2026-03-11 00:50:00.342460 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-11 00:50:00.342463 | orchestrator | Wednesday 11 March 2026 00:48:55 +0000 (0:00:01.679) 0:00:04.175 ******* 2026-03-11 00:50:00.342466 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-11 00:50:00.342469 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-11 00:50:00.342472 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-11 00:50:00.342475 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-11 00:50:00.342479 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-11 00:50:00.342482 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-11 00:50:00.342485 | orchestrator | 2026-03-11 00:50:00.342488 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-11 00:50:00.342491 | orchestrator | Wednesday 11 March 2026 00:48:58 +0000 (0:00:02.166) 0:00:06.342 ******* 2026-03-11 00:50:00.342494 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-11 00:50:00.342497 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-11 00:50:00.342500 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-11 00:50:00.342504 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-11 00:50:00.342507 | orchestrator | changed: 
[testbed-node-4] => (item=openvswitch) 2026-03-11 00:50:00.342510 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-11 00:50:00.342513 | orchestrator | 2026-03-11 00:50:00.342516 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-11 00:50:00.342519 | orchestrator | Wednesday 11 March 2026 00:49:00 +0000 (0:00:02.084) 0:00:08.427 ******* 2026-03-11 00:50:00.342522 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-11 00:50:00.342525 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:00.342529 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-11 00:50:00.342532 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:00.342535 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-11 00:50:00.342538 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:00.342541 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-11 00:50:00.342544 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:00.342547 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-11 00:50:00.342551 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:00.342554 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-11 00:50:00.342557 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:00.342560 | orchestrator | 2026-03-11 00:50:00.342563 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-11 00:50:00.342566 | orchestrator | Wednesday 11 March 2026 00:49:01 +0000 (0:00:01.492) 0:00:09.919 ******* 2026-03-11 00:50:00.342569 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:00.342572 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:00.342575 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:00.342579 | orchestrator | skipping: [testbed-node-3] 2026-03-11 
00:50:00.342584 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:00.342587 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:00.342590 | orchestrator | 2026-03-11 00:50:00.342593 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-11 00:50:00.342596 | orchestrator | Wednesday 11 March 2026 00:49:02 +0000 (0:00:00.798) 0:00:10.717 ******* 2026-03-11 00:50:00.342606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342617 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 
00:50:00.342650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342665 | orchestrator | 2026-03-11 00:50:00.342670 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-11 00:50:00.342675 | orchestrator | Wednesday 11 March 2026 00:49:04 +0000 (0:00:02.295) 0:00:13.013 ******* 2026-03-11 00:50:00.342683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342765 | orchestrator | 2026-03-11 00:50:00.342770 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-11 00:50:00.342775 | orchestrator | Wednesday 11 March 2026 00:49:08 +0000 (0:00:04.025) 0:00:17.039 ******* 2026-03-11 00:50:00.342780 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:00.342785 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:00.342790 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:00.342794 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:00.342799 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:00.342804 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:00.342809 | orchestrator | 2026-03-11 00:50:00.342815 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-11 00:50:00.342820 | orchestrator | Wednesday 11 March 2026 00:49:10 +0000 (0:00:01.186) 0:00:18.226 ******* 2026-03-11 00:50:00.342825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342891 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:50:00.342901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-11 00:50:00.342905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-11 00:50:00.342908 | orchestrator |
2026-03-11 00:50:00.342911 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-11 00:50:00.342914 | orchestrator | Wednesday 11 March 2026 00:49:13 +0000 (0:00:03.066) 0:00:21.292 *******
2026-03-11 00:50:00.342920 | orchestrator |
2026-03-11 00:50:00.342923 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-11 00:50:00.342926 | orchestrator | Wednesday 11 March 2026 00:49:13 +0000 (0:00:00.299) 0:00:21.591 *******
2026-03-11 00:50:00.342929 | orchestrator |
2026-03-11 00:50:00.342932 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-11 00:50:00.342935 | orchestrator | Wednesday 11 March 2026 00:49:13 +0000 (0:00:00.292) 0:00:21.883 *******
2026-03-11 00:50:00.342938 | orchestrator |
2026-03-11 00:50:00.342942 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-11 00:50:00.342946 | orchestrator | Wednesday 11 March 2026 00:49:13 +0000 (0:00:00.214) 0:00:22.098 *******
2026-03-11 00:50:00.342949 | orchestrator |
2026-03-11 00:50:00.342953 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-11 00:50:00.342956 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.120) 0:00:22.218 *******
2026-03-11 00:50:00.342960 | orchestrator |
2026-03-11 00:50:00.342963 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-11 00:50:00.342967 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.103) 0:00:22.322 *******
2026-03-11 00:50:00.342970 | orchestrator |
2026-03-11 00:50:00.342974 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-11 00:50:00.342979 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.158) 0:00:22.480 *******
2026-03-11 00:50:00.342984 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:00.342993 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:00.342998 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:00.343002 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:00.343007 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:00.343013 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:00.343018 | orchestrator |
2026-03-11 00:50:00.343023 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-11 00:50:00.343028 | orchestrator | Wednesday 11 March 2026 00:49:24 +0000 (0:00:10.577) 0:00:33.058 *******
2026-03-11 00:50:00.343033 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:00.343039 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:00.343045 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:00.343051 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:50:00.343056 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:50:00.343061 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:50:00.343065 | orchestrator |
2026-03-11 00:50:00.343069 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-11 00:50:00.343073 | orchestrator | Wednesday 11 March 2026 00:49:27 +0000 (0:00:02.449) 0:00:35.507 *******
2026-03-11 00:50:00.343076 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:00.343080 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:00.343083 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:00.343087 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:00.343091 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:00.343094 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:00.343098 | orchestrator |
2026-03-11 00:50:00.343102 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-11 00:50:00.343105 | orchestrator | Wednesday 11 March 2026 00:49:37 +0000 (0:00:09.876) 0:00:45.384 *******
2026-03-11 00:50:00.343112 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-11 00:50:00.343116 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-11 00:50:00.343120 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-11 00:50:00.343123 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-11 00:50:00.343127 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-11 00:50:00.343134 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-11 00:50:00.343137 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-11 00:50:00.343143 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-11 00:50:00.343147 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-11 00:50:00.343150 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-11 00:50:00.343154 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-11 00:50:00.343158 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-11 00:50:00.343161 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-11 00:50:00.343165 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-11 00:50:00.343168 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-11 00:50:00.343172 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-11 00:50:00.343175 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-11 00:50:00.343179 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-11 00:50:00.343183 | orchestrator |
2026-03-11 00:50:00.343186 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-11 00:50:00.343190 | orchestrator | Wednesday 11 March 2026 00:49:44 +0000 (0:00:07.544) 0:00:52.929 *******
2026-03-11 00:50:00.343194 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-11 00:50:00.343198 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:00.343201 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-11 00:50:00.343205 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:00.343208 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-11 00:50:00.343212 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:00.343215 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-11 00:50:00.343219 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-11 00:50:00.343223 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-11 00:50:00.343226 | orchestrator |
2026-03-11 00:50:00.343230 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-11 00:50:00.343233 | orchestrator | Wednesday 11 March 2026 00:49:46 +0000 (0:00:02.076) 0:00:55.006 *******
2026-03-11 00:50:00.343237 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-11 00:50:00.343241 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:00.343245 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-11 00:50:00.343248 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:00.343252 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-11 00:50:00.343256 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:00.343259 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-11 00:50:00.343263 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-11 00:50:00.343266 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-11 00:50:00.343270 | orchestrator |
2026-03-11 00:50:00.343274 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-11 00:50:00.343279 | orchestrator | Wednesday 11 March 2026 00:49:49 +0000 (0:00:02.873) 0:00:57.880 *******
2026-03-11 00:50:00.343283 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:00.343286 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:00.343290 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:00.343294 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:00.343297 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:00.343301 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:00.343305 | orchestrator |
2026-03-11 00:50:00.343308 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:50:00.343312 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-11 00:50:00.343319 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-11 00:50:00.343323 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-11 00:50:00.343327 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-11 00:50:00.343330 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-11 00:50:00.343334 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-11 00:50:00.343337 | orchestrator |
2026-03-11 00:50:00.343340 | orchestrator |
2026-03-11 00:50:00.343346 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:50:00.343349 | orchestrator | Wednesday 11 March 2026 00:49:57 +0000 (0:00:07.990) 0:01:05.870 *******
2026-03-11 00:50:00.343352 | orchestrator | ===============================================================================
2026-03-11 00:50:00.343355 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.87s
2026-03-11 00:50:00.343358 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.58s
2026-03-11 00:50:00.343361 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.54s
2026-03-11 00:50:00.343364 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.03s
2026-03-11 00:50:00.343367 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.07s
2026-03-11 00:50:00.343370 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.87s
2026-03-11 00:50:00.343373 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.45s
2026-03-11 00:50:00.343376 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.30s
2026-03-11 00:50:00.343379 | orchestrator | module-load : Load modules ---------------------------------------------- 2.17s
2026-03-11 00:50:00.343382 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.08s
2026-03-11 00:50:00.343385 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.08s
2026-03-11 00:50:00.343388 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.68s
2026-03-11 00:50:00.343392 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.49s
2026-03-11 00:50:00.343395 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.19s
2026-03-11 00:50:00.343398 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.19s
2026-03-11 00:50:00.343401 | orchestrator | Group hosts
based on Kolla action --------------------------------------- 1.17s 2026-03-11 00:50:00.343404 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2026-03-11 00:50:00.343410 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.80s 2026-03-11 00:50:03.367539 | orchestrator | 2026-03-11 00:50:03 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:03.367992 | orchestrator | 2026-03-11 00:50:03 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:03.368702 | orchestrator | 2026-03-11 00:50:03 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:03.373982 | orchestrator | 2026-03-11 00:50:03 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:03.374051 | orchestrator | 2026-03-11 00:50:03 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:03.374329 | orchestrator | 2026-03-11 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:06.438431 | orchestrator | 2026-03-11 00:50:06 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:06.438493 | orchestrator | 2026-03-11 00:50:06 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:06.438502 | orchestrator | 2026-03-11 00:50:06 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:06.438508 | orchestrator | 2026-03-11 00:50:06 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:06.438514 | orchestrator | 2026-03-11 00:50:06 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:06.438520 | orchestrator | 2026-03-11 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:09.433703 | orchestrator | 2026-03-11 00:50:09 | INFO  | Task 
eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:09.436016 | orchestrator | 2026-03-11 00:50:09 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:09.437720 | orchestrator | 2026-03-11 00:50:09 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:09.439587 | orchestrator | 2026-03-11 00:50:09 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:09.441911 | orchestrator | 2026-03-11 00:50:09 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:09.442002 | orchestrator | 2026-03-11 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:12.466057 | orchestrator | 2026-03-11 00:50:12 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:12.466604 | orchestrator | 2026-03-11 00:50:12 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:12.467591 | orchestrator | 2026-03-11 00:50:12 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:12.468204 | orchestrator | 2026-03-11 00:50:12 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:12.468790 | orchestrator | 2026-03-11 00:50:12 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:12.468831 | orchestrator | 2026-03-11 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:15.506558 | orchestrator | 2026-03-11 00:50:15 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:15.507817 | orchestrator | 2026-03-11 00:50:15 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:15.509596 | orchestrator | 2026-03-11 00:50:15 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:15.511444 | orchestrator | 2026-03-11 00:50:15 | INFO  | Task 
435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:15.512580 | orchestrator | 2026-03-11 00:50:15 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:15.512608 | orchestrator | 2026-03-11 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:18.545024 | orchestrator | 2026-03-11 00:50:18 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:18.545254 | orchestrator | 2026-03-11 00:50:18 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:18.546279 | orchestrator | 2026-03-11 00:50:18 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:18.547117 | orchestrator | 2026-03-11 00:50:18 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:18.549790 | orchestrator | 2026-03-11 00:50:18 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:18.549836 | orchestrator | 2026-03-11 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:21.605604 | orchestrator | 2026-03-11 00:50:21 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:21.605663 | orchestrator | 2026-03-11 00:50:21 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:21.605673 | orchestrator | 2026-03-11 00:50:21 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:21.605680 | orchestrator | 2026-03-11 00:50:21 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:21.605686 | orchestrator | 2026-03-11 00:50:21 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:21.605693 | orchestrator | 2026-03-11 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:24.939441 | orchestrator | 2026-03-11 00:50:24 | INFO  | Task 
eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:24.939502 | orchestrator | 2026-03-11 00:50:24 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:24.939511 | orchestrator | 2026-03-11 00:50:24 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:24.939518 | orchestrator | 2026-03-11 00:50:24 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:24.939525 | orchestrator | 2026-03-11 00:50:24 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:24.939531 | orchestrator | 2026-03-11 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:28.279428 | orchestrator | 2026-03-11 00:50:28 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:28.279997 | orchestrator | 2026-03-11 00:50:28 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:28.280647 | orchestrator | 2026-03-11 00:50:28 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:28.281473 | orchestrator | 2026-03-11 00:50:28 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:28.282178 | orchestrator | 2026-03-11 00:50:28 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:28.282207 | orchestrator | 2026-03-11 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:31.321070 | orchestrator | 2026-03-11 00:50:31 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:31.321411 | orchestrator | 2026-03-11 00:50:31 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:31.322150 | orchestrator | 2026-03-11 00:50:31 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state STARTED 2026-03-11 00:50:31.322617 | orchestrator | 2026-03-11 00:50:31 | INFO  | Task 
435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:50:31.323354 | orchestrator | 2026-03-11 00:50:31 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:50:31.323378 | orchestrator | 2026-03-11 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:34.355592 | orchestrator | 2026-03-11 00:50:34 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:50:34.355938 | orchestrator | 2026-03-11 00:50:34 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED 2026-03-11 00:50:34.357394 | orchestrator | 2026-03-11 00:50:34 | INFO  | Task c8024a85-f81c-4a2e-bd6b-7356ce6b11f6 is in state SUCCESS 2026-03-11 00:50:34.358599 | orchestrator | 2026-03-11 00:50:34.358626 | orchestrator | 2026-03-11 00:50:34.358631 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-11 00:50:34.358636 | orchestrator | 2026-03-11 00:50:34.358643 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-11 00:50:34.358652 | orchestrator | Wednesday 11 March 2026 00:46:19 +0000 (0:00:00.171) 0:00:00.171 ******* 2026-03-11 00:50:34.358661 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:50:34.358668 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:50:34.358675 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:50:34.358681 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:50:34.358689 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:50:34.358694 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:50:34.358698 | orchestrator | 2026-03-11 00:50:34.358702 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-11 00:50:34.358707 | orchestrator | Wednesday 11 March 2026 00:46:20 +0000 (0:00:00.627) 0:00:00.799 ******* 2026-03-11 00:50:34.358710 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:34.358715 | 
orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.358719 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.358723 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.358727 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.358731 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.358734 | orchestrator |
2026-03-11 00:50:34.358738 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-11 00:50:34.358742 | orchestrator | Wednesday 11 March 2026 00:46:20 +0000 (0:00:00.682) 0:00:01.423 *******
2026-03-11 00:50:34.358746 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.358750 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.358753 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.358757 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.358761 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.358765 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.358769 | orchestrator |
2026-03-11 00:50:34.358772 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-11 00:50:34.358776 | orchestrator | Wednesday 11 March 2026 00:46:21 +0000 (0:00:00.682) 0:00:02.105 *******
2026-03-11 00:50:34.358780 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.358784 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.358788 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.358791 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.358795 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.358799 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.358803 | orchestrator |
2026-03-11 00:50:34.358807 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-11 00:50:34.358811 | orchestrator | Wednesday 11 March 2026 00:46:23 +0000 (0:00:01.888) 0:00:03.994 *******
2026-03-11 00:50:34.358814 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.358828 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.358832 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.358836 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.358839 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.358843 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.358850 | orchestrator |
2026-03-11 00:50:34.358856 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-11 00:50:34.358862 | orchestrator | Wednesday 11 March 2026 00:46:25 +0000 (0:00:01.659) 0:00:05.654 *******
2026-03-11 00:50:34.358868 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.358903 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.358910 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.358916 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.358921 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.358927 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.358933 | orchestrator |
2026-03-11 00:50:34.358939 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-11 00:50:34.358943 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:01.114) 0:00:06.769 *******
2026-03-11 00:50:34.358947 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.358951 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.358954 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.358958 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.358962 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.358966 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.358969 | orchestrator |
2026-03-11 00:50:34.358973 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-11 00:50:34.358977 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:00.686) 0:00:07.455 *******
2026-03-11 00:50:34.358981 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.358984 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.358988 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.358992 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.358996 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.358999 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359003 | orchestrator |
2026-03-11 00:50:34.359007 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-11 00:50:34.359011 | orchestrator | Wednesday 11 March 2026 00:46:27 +0000 (0:00:00.772) 0:00:08.227 *******
2026-03-11 00:50:34.359018 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-11 00:50:34.359022 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-11 00:50:34.359026 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359029 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-11 00:50:34.359033 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-11 00:50:34.359037 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359041 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-11 00:50:34.359044 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-11 00:50:34.359048 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-11 00:50:34.359052 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-11 00:50:34.359062 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359066 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-11 00:50:34.359070 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-11 00:50:34.359074 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359077 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359081 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-11 00:50:34.359089 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-11 00:50:34.359092 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359096 | orchestrator |
2026-03-11 00:50:34.359100 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-11 00:50:34.359104 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:00.816) 0:00:09.044 *******
2026-03-11 00:50:34.359107 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359111 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359115 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359119 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359122 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359126 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359130 | orchestrator |
2026-03-11 00:50:34.359133 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-11 00:50:34.359138 | orchestrator | Wednesday 11 March 2026 00:46:29 +0000 (0:00:01.045) 0:00:10.176 *******
2026-03-11 00:50:34.359142 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:50:34.359145 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:50:34.359149 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:50:34.359153 | orchestrator | ok: [testbed-node-0]
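The k3s_prereq networking tasks above (IPv4/IPv6 forwarding, router advertisements, and the bridge-nf-call sysctls) all reduce to persisting kernel sysctl settings on each node. A minimal sketch, assuming the keys implied by the task names; the values, the file name, and the staging path are assumptions, not the role's exact implementation:

```shell
# Stage the sysctl keys the k3s_prereq tasks above toggle (values assumed
# from the task names; accept_ra=2 keeps router advertisements enabled on
# a host that also forwards).
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl_count=$(grep -c '=' "$conf")
echo "$sysctl_count sysctl settings staged"
# On a real node this file would go to /etc/sysctl.d/ followed by
# 'sysctl --system'.
```

Note that in this run the br_netfilter and bridge-nf-call tasks were skipped on every node, so only the forwarding and router-advertisement keys were actually changed.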
2026-03-11 00:50:34.359156 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.359160 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.359164 | orchestrator |
2026-03-11 00:50:34.359167 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-11 00:50:34.359171 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:01.045) 0:00:11.222 *******
2026-03-11 00:50:34.359175 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.359179 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.359182 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.359186 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.359190 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.359193 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.359197 | orchestrator |
2026-03-11 00:50:34.359201 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-11 00:50:34.359205 | orchestrator | Wednesday 11 March 2026 00:46:36 +0000 (0:00:05.773) 0:00:16.996 *******
2026-03-11 00:50:34.359208 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359212 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359216 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359266 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359272 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359276 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359280 | orchestrator |
2026-03-11 00:50:34.359285 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-11 00:50:34.359289 | orchestrator | Wednesday 11 March 2026 00:46:37 +0000 (0:00:00.909) 0:00:17.905 *******
2026-03-11 00:50:34.359293 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359298 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359302 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359306 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359310 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359314 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359318 | orchestrator |
2026-03-11 00:50:34.359322 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-11 00:50:34.359327 | orchestrator | Wednesday 11 March 2026 00:46:39 +0000 (0:00:02.178) 0:00:20.084 *******
2026-03-11 00:50:34.359331 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359336 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359340 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359344 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359349 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359356 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359360 | orchestrator |
2026-03-11 00:50:34.359365 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-11 00:50:34.359372 | orchestrator | Wednesday 11 March 2026 00:46:41 +0000 (0:00:01.588) 0:00:21.672 *******
2026-03-11 00:50:34.359382 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-11 00:50:34.359389 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-11 00:50:34.359395 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359401 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-11 00:50:34.359407 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-11 00:50:34.359413 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359420 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-11 00:50:34.359426 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-11 00:50:34.359432 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359437 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-11 00:50:34.359441 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-11 00:50:34.359446 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359450 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-11 00:50:34.359455 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-11 00:50:34.359459 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359464 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-11 00:50:34.359468 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-11 00:50:34.359472 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359476 | orchestrator |
2026-03-11 00:50:34.359480 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-11 00:50:34.359487 | orchestrator | Wednesday 11 March 2026 00:46:43 +0000 (0:00:02.294) 0:00:23.967 *******
2026-03-11 00:50:34.359491 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359495 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359499 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359502 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359506 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359510 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359513 | orchestrator |
2026-03-11 00:50:34.359520 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-11 00:50:34.359527 | orchestrator | Wednesday 11 March 2026 00:46:44 +0000 (0:00:01.399) 0:00:25.367 *******
2026-03-11 00:50:34.359536 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.359541 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.359547 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.359553 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.359559 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.359565 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.359573 | orchestrator |
2026-03-11 00:50:34.359577 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-11 00:50:34.359581 | orchestrator |
2026-03-11 00:50:34.359584 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-11 00:50:34.359588 | orchestrator | Wednesday 11 March 2026 00:46:45 +0000 (0:00:01.192) 0:00:26.559 *******
2026-03-11 00:50:34.359592 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.359596 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.359599 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.359603 | orchestrator |
2026-03-11 00:50:34.359607 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-11 00:50:34.359611 | orchestrator | Wednesday 11 March 2026 00:46:47 +0000 (0:00:01.229) 0:00:27.788 *******
2026-03-11 00:50:34.359614 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.359618 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.359622 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.359629 | orchestrator |
2026-03-11 00:50:34.359633 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-11 00:50:34.359637 | orchestrator | Wednesday 11 March 2026 00:46:48 +0000 (0:00:01.084) 0:00:28.873 *******
2026-03-11 00:50:34.359641 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.359644 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.359648 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.359652 | orchestrator |
2026-03-11 00:50:34.359656 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-11 00:50:34.360031 | orchestrator | Wednesday 11 March 2026 00:46:49 +0000 (0:00:00.793) 0:00:29.667 *******
2026-03-11 00:50:34.360046 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360050 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360054 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360058 | orchestrator |
2026-03-11 00:50:34.360061 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-11 00:50:34.360066 | orchestrator | Wednesday 11 March 2026 00:46:49 +0000 (0:00:00.734) 0:00:30.401 *******
2026-03-11 00:50:34.360069 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.360073 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360077 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360081 | orchestrator |
2026-03-11 00:50:34.360084 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-11 00:50:34.360088 | orchestrator | Wednesday 11 March 2026 00:46:50 +0000 (0:00:00.419) 0:00:30.821 *******
2026-03-11 00:50:34.360092 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360096 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360100 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360103 | orchestrator |
2026-03-11 00:50:34.360107 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-11 00:50:34.360111 | orchestrator | Wednesday 11 March 2026 00:46:51 +0000 (0:00:01.526) 0:00:32.347 *******
2026-03-11 00:50:34.360115 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360118 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360122 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360126 | orchestrator |
2026-03-11 00:50:34.360130 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-11 00:50:34.360133 | orchestrator | Wednesday 11 March 2026 00:46:53 +0000 (0:00:01.846) 0:00:34.194 *******
2026-03-11 00:50:34.360137 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:50:34.360141 | orchestrator |
2026-03-11 00:50:34.360145 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-11 00:50:34.360148 | orchestrator | Wednesday 11 March 2026 00:46:54 +0000 (0:00:00.710) 0:00:34.905 *******
2026-03-11 00:50:34.360152 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360156 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360160 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360163 | orchestrator |
2026-03-11 00:50:34.360167 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-11 00:50:34.360171 | orchestrator | Wednesday 11 March 2026 00:46:57 +0000 (0:00:03.078) 0:00:37.984 *******
2026-03-11 00:50:34.360175 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360178 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360182 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360186 | orchestrator |
2026-03-11 00:50:34.360190 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-11 00:50:34.360193 | orchestrator | Wednesday 11 March 2026 00:46:57 +0000 (0:00:00.472) 0:00:38.456 *******
2026-03-11 00:50:34.360197 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360201 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360204 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360208 | orchestrator |
2026-03-11 00:50:34.360212 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-11 00:50:34.360220 | orchestrator | Wednesday 11 March 2026 00:46:58 +0000 (0:00:00.933) 0:00:39.390 *******
2026-03-11 00:50:34.360224 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360228 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360231 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360235 | orchestrator |
2026-03-11 00:50:34.360239 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-11 00:50:34.360248 | orchestrator | Wednesday 11 March 2026 00:47:00 +0000 (0:00:01.757) 0:00:41.148 *******
2026-03-11 00:50:34.360252 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.360256 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360259 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360263 | orchestrator |
2026-03-11 00:50:34.360267 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-11 00:50:34.360271 | orchestrator | Wednesday 11 March 2026 00:47:01 +0000 (0:00:00.723) 0:00:41.871 *******
2026-03-11 00:50:34.360274 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.360278 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360282 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360286 | orchestrator |
2026-03-11 00:50:34.360289 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-11 00:50:34.360293 | orchestrator | Wednesday 11 March 2026 00:47:01 +0000 (0:00:00.370) 0:00:42.241 *******
2026-03-11 00:50:34.360297 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360301 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360304 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360308 | orchestrator |
2026-03-11 00:50:34.360312 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-11 00:50:34.360315 | orchestrator | Wednesday 11 March 2026 00:47:03 +0000 (0:00:02.107) 0:00:44.349 *******
2026-03-11 00:50:34.360319 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360325 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360329 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360332 | orchestrator |
2026-03-11 00:50:34.360336 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-11 00:50:34.360340 | orchestrator | Wednesday 11 March 2026 00:47:06 +0000 (0:00:02.282) 0:00:46.632 *******
2026-03-11 00:50:34.360344 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360347 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360351 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360355 | orchestrator |
2026-03-11 00:50:34.360359 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-11 00:50:34.360363 | orchestrator | Wednesday 11 March 2026 00:47:06 +0000 (0:00:00.879) 0:00:47.511 *******
2026-03-11 00:50:34.360369 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-11 00:50:34.360379 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-11 00:50:34.360387 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-11 00:50:34.360393 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-11 00:50:34.360399 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-11 00:50:34.360405 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-11 00:50:34.360411 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-11 00:50:34.360417 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-11 00:50:34.360426 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-11 00:50:34.360432 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-11 00:50:34.360439 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-11 00:50:34.360446 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-11 00:50:34.360452 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360458 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360464 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360470 | orchestrator |
2026-03-11 00:50:34.360477 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-11 00:50:34.360483 | orchestrator | Wednesday 11 March 2026 00:47:50 +0000 (0:00:43.198) 0:01:30.710 *******
2026-03-11 00:50:34.360489 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.360496 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360502 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360507 | orchestrator |
2026-03-11 00:50:34.360511 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-11 00:50:34.360515 | orchestrator | Wednesday 11 March 2026 00:47:50 +0000 (0:00:00.297) 0:01:31.007 *******
2026-03-11 00:50:34.360519 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360523 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360527 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360532 | orchestrator |
2026-03-11 00:50:34.360536 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-11 00:50:34.360540 | orchestrator | Wednesday 11 March 2026 00:47:51 +0000 (0:00:01.037) 0:01:32.045 *******
2026-03-11 00:50:34.360545 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360549 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360553 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360557 | orchestrator |
2026-03-11 00:50:34.360565 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-11 00:50:34.360569 | orchestrator | Wednesday 11 March 2026 00:47:52 +0000 (0:00:01.408) 0:01:33.454 *******
2026-03-11 00:50:34.360574 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360578 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360582 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360586 | orchestrator |
2026-03-11 00:50:34.360591 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-11 00:50:34.360595 | orchestrator | Wednesday 11 March 2026 00:48:17 +0000 (0:00:25.016) 0:01:58.470 *******
2026-03-11 00:50:34.360599 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360604 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360608 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360615 | orchestrator |
2026-03-11 00:50:34.360621 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-11 00:50:34.360627 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.574) 0:01:59.044 *******
2026-03-11 00:50:34.360633 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360639 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360645 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360651 | orchestrator |
2026-03-11 00:50:34.360658 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-11 00:50:34.360664 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.517) 0:01:59.561 *******
2026-03-11 00:50:34.360674 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360681 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360688 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360694 | orchestrator |
2026-03-11 00:50:34.360705 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-11 00:50:34.360712 | orchestrator | Wednesday 11 March 2026 00:48:19 +0000 (0:00:00.533) 0:02:00.095 *******
2026-03-11 00:50:34.360719 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360726 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360733 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360740 | orchestrator |
2026-03-11 00:50:34.360746 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-11 00:50:34.360753 | orchestrator | Wednesday 11 March 2026 00:48:20 +0000 (0:00:00.700) 0:02:00.795 *******
2026-03-11 00:50:34.360759 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.360766 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.360771 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.360777 | orchestrator |
2026-03-11 00:50:34.360784 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-11 00:50:34.360790 | orchestrator | Wednesday 11 March 2026 00:48:20 +0000 (0:00:00.318) 0:02:01.114 *******
2026-03-11 00:50:34.360796 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360803 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360810 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360816 | orchestrator |
2026-03-11 00:50:34.360823 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-11 00:50:34.360830 | orchestrator | Wednesday 11 March 2026 00:48:21 +0000 (0:00:00.623) 0:02:01.737 *******
2026-03-11 00:50:34.360837 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360844 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360851 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360861 | orchestrator |
2026-03-11 00:50:34.360868 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-11 00:50:34.360892 | orchestrator | Wednesday 11 March 2026 00:48:21 +0000 (0:00:00.539) 0:02:02.277 *******
2026-03-11 00:50:34.360900 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360908 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360914 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360921 | orchestrator |
2026-03-11 00:50:34.360926 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-11 00:50:34.360930 | orchestrator | Wednesday 11 March 2026 00:48:22 +0000 (0:00:00.935) 0:02:03.212 *******
2026-03-11 00:50:34.360935 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:34.360939 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:34.360943 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:34.360948 | orchestrator |
2026-03-11 00:50:34.360952 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-11 00:50:34.360956 | orchestrator | Wednesday 11 March 2026 00:48:23 +0000 (0:00:00.769) 0:02:03.982 *******
2026-03-11 00:50:34.360960 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.360963 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360967 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360971 | orchestrator |
2026-03-11 00:50:34.360974 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-11 00:50:34.360979 | orchestrator | Wednesday 11 March 2026 00:48:23 +0000 (0:00:00.254) 0:02:04.236 *******
2026-03-11 00:50:34.360982 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:34.360986 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:34.360990 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:34.360994 | orchestrator |
2026-03-11 00:50:34.360997 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-11 00:50:34.361001 | orchestrator | Wednesday 11 March 2026 00:48:23 +0000 (0:00:00.275) 0:02:04.512 *******
2026-03-11 00:50:34.361005 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.361009 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.361012 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.361016 | orchestrator |
2026-03-11 00:50:34.361020 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-11 00:50:34.361028 | orchestrator | Wednesday 11 March 2026 00:48:24 +0000 (0:00:00.748) 0:02:05.260 *******
2026-03-11 00:50:34.361032 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:34.361036 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:34.361040 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:34.361043 | orchestrator |
2026-03-11 00:50:34.361047 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-11 00:50:34.361051 | orchestrator | Wednesday 11 March 2026 00:48:25 +0000 (0:00:00.585) 0:02:05.846 *******
2026-03-11 00:50:34.361055 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-11 00:50:34.361063 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-11 00:50:34.361067 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-11 00:50:34.361071 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-11 00:50:34.361075 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-11 00:50:34.361078 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-11 00:50:34.361082 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-11 00:50:34.361086 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-11 00:50:34.361089 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-11 00:50:34.361093 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-11 00:50:34.361100 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-11 00:50:34.361103 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-11 00:50:34.361107 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-11 00:50:34.361111 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-11 00:50:34.361115 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-11 00:50:34.361118 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-11 00:50:34.361122 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-11 00:50:34.361126 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-11 00:50:34.361129 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-11 00:50:34.361133 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-11 00:50:34.361137 | orchestrator |
2026-03-11 00:50:34.361141 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-11 00:50:34.361144 | orchestrator |
2026-03-11 00:50:34.361148 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-11 00:50:34.361152 | orchestrator | Wednesday 11 March 2026 00:48:28 +0000 (0:00:03.036) 0:02:08.883 *******
2026-03-11 00:50:34.361156 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:50:34.361159 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:50:34.361163 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:50:34.361167 | orchestrator |
2026-03-11 00:50:34.361170 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-11 00:50:34.361174 | orchestrator | Wednesday 11 March 2026 00:48:28 +0000 (0:00:00.420) 0:02:09.303 *******
2026-03-11 00:50:34.361178 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:50:34.361182 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:50:34.361188 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:50:34.361192 | orchestrator |
2026-03-11 00:50:34.361196 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-11 00:50:34.361199 | orchestrator | Wednesday 11 March 2026 00:48:29 +0000 (0:00:00.596) 0:02:09.899 *******
2026-03-11 00:50:34.361203 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:50:34.361207 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:50:34.361211 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:50:34.361214 | orchestrator |
2026-03-11 00:50:34.361218 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-11 00:50:34.361222 | orchestrator | Wednesday 11 March 2026 00:48:29 +0000 (0:00:00.281) 0:02:10.181 *******
2026-03-11 00:50:34.361225 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:50:34.361229 | orchestrator |
2026-03-11 00:50:34.361233 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-11 00:50:34.361237 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:00.522) 0:02:10.704 *******
2026-03-11 00:50:34.361240 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.361244 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.361248 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.361252 | orchestrator |
2026-03-11 00:50:34.361255 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-11 00:50:34.361259 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:00.261) 0:02:10.965 *******
2026-03-11 00:50:34.361263 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.361266 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.361270 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.361274 | orchestrator |
2026-03-11 00:50:34.361278 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-11 00:50:34.361281 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:00.252) 0:02:11.218 *******
2026-03-11 00:50:34.361285 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:50:34.361289 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:50:34.361292 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:50:34.361296 | orchestrator |
2026-03-11 00:50:34.361300 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-11 00:50:34.361304 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:00.243) 0:02:11.462 *******
2026-03-11 00:50:34.361307 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.361311 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.361315 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.361318 | orchestrator |
2026-03-11 00:50:34.361324 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-11 00:50:34.361328 | orchestrator | Wednesday 11 March 2026 00:48:31 +0000 (0:00:00.694) 0:02:12.156 *******
2026-03-11 00:50:34.361332 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.361336 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.361340 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.361343 | orchestrator |
2026-03-11 00:50:34.361347 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-11 00:50:34.361351 | orchestrator | Wednesday 11 March 2026 00:48:32 +0000 (0:00:01.126) 0:02:13.282 *******
2026-03-11 00:50:34.361355 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.361358 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.361362 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.361366 | orchestrator |
2026-03-11 00:50:34.361369 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-11 00:50:34.361373 | orchestrator | Wednesday 11 March 2026 00:48:33 +0000 (0:00:01.276) 0:02:14.559 *******
2026-03-11 00:50:34.361377 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:50:34.361380 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:50:34.361384 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:50:34.361388 | orchestrator |
2026-03-11 00:50:34.361396 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-11 00:50:34.361400 | orchestrator |
2026-03-11 00:50:34.361405 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-11 00:50:34.361409 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:10.078) 0:02:24.638 *******
2026-03-11 00:50:34.361413 | orchestrator | ok: [testbed-manager]
2026-03-11 00:50:34.361417 | orchestrator |
2026-03-11 00:50:34.361420 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-11 00:50:34.361424 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.644) 0:02:25.282 *******
2026-03-11 00:50:34.361428 | orchestrator | changed: [testbed-manager]
2026-03-11 00:50:34.361432 | orchestrator |
2026-03-11 00:50:34.361435 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-11 00:50:34.361439 | orchestrator | Wednesday 11 March 2026 00:48:45 +0000 (0:00:00.386) 0:02:25.668 *******
2026-03-11 00:50:34.361443 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-11 00:50:34.361446 | orchestrator |
2026-03-11 00:50:34.361450 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-11 00:50:34.361454 | orchestrator | Wednesday 11 March 2026 00:48:45 +0000 (0:00:00.547) 0:02:26.216 *******
2026-03-11 00:50:34.361458 | orchestrator | changed: [testbed-manager]
2026-03-11 00:50:34.361461 | orchestrator |
2026-03-11 00:50:34.361465 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-11 00:50:34.361469 | orchestrator | Wednesday 11 March 2026 00:48:46 +0000 (0:00:00.650) 0:02:26.866 *******
2026-03-11 00:50:34.361472 | orchestrator | changed: [testbed-manager]
2026-03-11 00:50:34.361476 | orchestrator |
2026-03-11 00:50:34.361480 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-11 00:50:34.361483 | orchestrator | Wednesday 11 March 2026 00:48:46 +0000 (0:00:00.513) 0:02:27.379 *******
2026-03-11 00:50:34.361487 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-11 00:50:34.361491 | orchestrator |
2026-03-11 00:50:34.361494 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-11 00:50:34.361498 | orchestrator | Wednesday 11 March 2026 00:48:48 +0000 (0:00:01.360) 0:02:28.740 *******
2026-03-11 00:50:34.361502 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-11 00:50:34.361506 | orchestrator |
2026-03-11 00:50:34.361509 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-11 00:50:34.361513 | orchestrator | Wednesday 11 March 2026 00:48:48 +0000 (0:00:00.725) 0:02:29.466 ******* 2026-03-11 00:50:34.361517 | orchestrator | changed: [testbed-manager] 2026-03-11 00:50:34.361520 | orchestrator | 2026-03-11 00:50:34.361524 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-11 00:50:34.361528 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:00.464) 0:02:29.930 ******* 2026-03-11 00:50:34.361531 | orchestrator | changed: [testbed-manager] 2026-03-11 00:50:34.361535 | orchestrator | 2026-03-11 00:50:34.361539 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-11 00:50:34.361543 | orchestrator | 2026-03-11 00:50:34.361546 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-11 00:50:34.361550 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:00.353) 0:02:30.284 ******* 2026-03-11 00:50:34.361554 | orchestrator | ok: [testbed-manager] 2026-03-11 00:50:34.361557 | orchestrator | 2026-03-11 00:50:34.361561 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-11 00:50:34.361565 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:00.107) 0:02:30.391 ******* 2026-03-11 00:50:34.361568 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-11 00:50:34.361572 | orchestrator | 2026-03-11 00:50:34.361576 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-11 00:50:34.361580 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:00.167) 0:02:30.559 ******* 2026-03-11 00:50:34.361585 | orchestrator | ok: [testbed-manager] 2026-03-11 00:50:34.361589 | orchestrator | 2026-03-11 00:50:34.361593 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-03-11 00:50:34.361597 | orchestrator | Wednesday 11 March 2026 00:48:50 +0000 (0:00:00.728) 0:02:31.288 ******* 2026-03-11 00:50:34.361600 | orchestrator | ok: [testbed-manager] 2026-03-11 00:50:34.361604 | orchestrator | 2026-03-11 00:50:34.361608 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-11 00:50:34.361611 | orchestrator | Wednesday 11 March 2026 00:48:51 +0000 (0:00:01.250) 0:02:32.538 ******* 2026-03-11 00:50:34.361615 | orchestrator | changed: [testbed-manager] 2026-03-11 00:50:34.361619 | orchestrator | 2026-03-11 00:50:34.361622 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-11 00:50:34.361626 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.747) 0:02:33.285 ******* 2026-03-11 00:50:34.361630 | orchestrator | ok: [testbed-manager] 2026-03-11 00:50:34.361634 | orchestrator | 2026-03-11 00:50:34.361639 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-11 00:50:34.361643 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.488) 0:02:33.774 ******* 2026-03-11 00:50:34.361647 | orchestrator | changed: [testbed-manager] 2026-03-11 00:50:34.361651 | orchestrator | 2026-03-11 00:50:34.361654 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-11 00:50:34.361658 | orchestrator | Wednesday 11 March 2026 00:48:59 +0000 (0:00:06.471) 0:02:40.246 ******* 2026-03-11 00:50:34.361662 | orchestrator | changed: [testbed-manager] 2026-03-11 00:50:34.361666 | orchestrator | 2026-03-11 00:50:34.361669 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-11 00:50:34.361673 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:14.755) 0:02:55.001 ******* 2026-03-11 00:50:34.361677 | orchestrator | ok: [testbed-manager] 2026-03-11 
PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Wednesday 11 March 2026 00:49:15 +0000 (0:00:00.862)       0:02:55.863 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Wednesday 11 March 2026 00:49:15 +0000 (0:00:00.301)       0:02:56.165 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Wednesday 11 March 2026 00:49:15 +0000 (0:00:00.261)       0:02:56.427 *******
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Wednesday 11 March 2026 00:49:16 +0000 (0:00:00.523)       0:02:56.951 *******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Wednesday 11 March 2026 00:49:17 +0000 (0:00:00.715)       0:02:57.666 *******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Wednesday 11 March 2026 00:49:17 +0000 (0:00:00.760)       0:02:58.426 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Wednesday 11 March 2026 00:49:18 +0000 (0:00:00.228)       0:02:58.655 *******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.975)       0:02:59.630 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.103)       0:02:59.734 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.102)       0:02:59.836 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.109)       0:02:59.946 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.115)       0:03:00.061 *******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Wednesday 11 March 2026 00:49:24 +0000 (0:00:05.238)       0:03:05.300 *******
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
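The FAILED - RETRYING line above comes from Ansible's `retries`/`until` loop: the task re-runs its readiness check every few seconds until it passes or the 30 retries are exhausted. A minimal sketch of that retry pattern in Python (function and argument names are illustrative, not taken from the role):

```python
import time

def wait_until(check, retries=30, delay=1.0):
    """Re-run `check` until it returns True, like an Ansible task
    with `retries: 30` and an `until:` condition."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)  # pause between attempts
    return False

# Example: a probe that only succeeds on the third attempt.
attempts = iter([False, False, True])
wait_until(lambda: next(attempts), delay=0)
```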
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Wednesday 11 March 2026 00:50:06 +0000 (0:00:41.597)       0:03:46.897 *******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Wednesday 11 March 2026 00:50:07 +0000 (0:00:01.138)       0:03:48.036 *******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Wednesday 11 March 2026 00:50:08 +0000 (0:00:01.537)       0:03:49.573 *******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Wednesday 11 March 2026 00:50:09 +0000 (0:00:00.948)       0:03:50.521 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Wednesday 11 March 2026 00:50:10 +0000 (0:00:00.111)       0:03:50.633 *******
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Wednesday 11 March 2026 00:50:11 +0000 (0:00:01.710)       0:03:52.344 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Wednesday 11 March 2026 00:50:12 +0000 (0:00:00.364)       0:03:52.708 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Wednesday 11 March 2026 00:50:13 +0000 (0:00:01.102)       0:03:53.810 *******
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Wednesday 11 March 2026 00:50:13 +0000 (0:00:00.114)       0:03:53.924 *******
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Wednesday 11 March 2026 00:50:13 +0000 (0:00:00.171)       0:03:54.096 *******
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Wednesday 11 March 2026 00:50:18 +0000 (0:00:05.036)       0:03:59.132 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Wednesday 11 March 2026 00:50:19 +0000 (0:00:00.799)       0:03:59.932 *******
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Wednesday 11 March 2026 00:50:31 +0000 (0:00:11.813)       0:04:11.746 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Wednesday 11 March 2026 00:50:31 +0000 (0:00:00.561)       0:04:12.307 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager            : ok=21   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=50   changed=23   unreachable=0    failed=0    skipped=28   rescued=0    ignored=0
testbed-node-1             : ok=38   changed=16   unreachable=0    failed=0    skipped=25   rescued=0    ignored=0
testbed-node-2             : ok=38   changed=16   unreachable=0    failed=0    skipped=25   rescued=0    ignored=0
testbed-node-3             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
testbed-node-4             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
testbed-node-5             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Wednesday 11 March 2026 00:50:32 +0000 (0:00:00.357)       0:04:12.665 *******
===============================================================================
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.20s
k3s_server_post : Wait for Cilium resources ---------------------------- 41.60s
k3s_server : Enable and check K3s service ------------------------------ 25.02s
kubectl : Install required packages ------------------------------------ 14.76s
Manage labels ---------------------------------------------------------- 11.81s
k3s_agent : Manage k3s service ----------------------------------------- 10.08s
kubectl : Add repository Debian ----------------------------------------- 6.47s
k3s_download : Download k3s binary x64 ---------------------------------- 5.77s
k3s_server_post : Install Cilium ---------------------------------------- 5.24s
k9s : Install k9s packages ---------------------------------------------- 5.04s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.08s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.04s
k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.30s
k3s_server : Detect Kubernetes version for label compatibility ---------- 2.28s
k3s_download : Download k3s binary armhf -------------------------------- 2.18s
k3s_server : Init cluster inside the transient k3s-init service --------- 2.11s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.89s
k3s_server : Create custom resolv.conf for k3s -------------------------- 1.85s
k3s_server : Copy vip manifest to first master -------------------------- 1.76s
k3s_server_post : Test for BGP config resources ------------------------- 1.71s

2026-03-11 00:50:34 | INFO  | Task 61748691-3b66-4ed3-9344-f494b0306327 is in state STARTED
2026-03-11 00:50:34 | INFO  | Task 4c7c2f14-ba68-42be-8f93-6ca0f4c33e4f is in state STARTED
2026-03-11 00:50:34 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:50:34 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:50:34 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:37 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED
2026-03-11 00:50:37 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:50:37 | INFO  | Task 61748691-3b66-4ed3-9344-f494b0306327 is in state STARTED
2026-03-11 00:50:37 | INFO  | Task 4c7c2f14-ba68-42be-8f93-6ca0f4c33e4f is in state STARTED
2026-03-11 00:50:37 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:50:37 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:50:37 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:40 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED
2026-03-11 00:50:40 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:50:40 | INFO  | Task 61748691-3b66-4ed3-9344-f494b0306327 is in state STARTED
2026-03-11 00:50:40 | INFO  | Task 4c7c2f14-ba68-42be-8f93-6ca0f4c33e4f is in state SUCCESS
2026-03-11 00:50:40 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:50:40 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:50:40 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:43 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED
2026-03-11 00:50:43 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state STARTED
2026-03-11 00:50:43 | INFO  | Task 61748691-3b66-4ed3-9344-f494b0306327 is in state SUCCESS
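The INFO lines above are the OSISM manager output polling its deployment tasks once per second until each one leaves STARTED. A minimal sketch of that polling loop (names such as `wait_for_tasks` and `get_state` are illustrative stand-ins, not the real manager API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=600):
    """Poll task states once per interval until no task is STARTED,
    mirroring the "Wait 1 second(s) until the next check" loop above."""
    pending = set(task_ids)
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)  # terminal state reached
        if not pending:
            return True
        time.sleep(interval)
    return False
```

Iterating over `sorted(pending)` takes a snapshot, so discarding finished tasks inside the loop is safe.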
2026-03-11 00:51:19 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:51:19 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:51:23 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED
2026-03-11 00:51:23 | INFO  | Task cfb23170-21cb-4bf3-8d4f-7ee31c7c1481 is in state SUCCESS
2026-03-11 00:51:23 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:51:23 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:51:23 | INFO  | Wait 1 second(s) until the next check

PLAY [Copy kubeconfig to the configuration repository] *************************

TASK [Get kubeconfig file] *****************************************************
Wednesday 11 March 2026 00:50:36 +0000 (0:00:00.124)       0:00:00.124 *******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Wednesday 11 March 2026 00:50:36 +0000 (0:00:00.647)       0:00:00.772 *******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig file] ****************************
Wednesday 11 March 2026 00:50:37 +0000 (0:00:01.101)       0:00:01.873 *******
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Wednesday 11 March 2026 00:50:38 +0000 (0:00:00.483)       0:00:02.357 *******
===============================================================================
Write kubeconfig file --------------------------------------------------- 1.10s
Get kubeconfig file ----------------------------------------------------- 0.65s
Change server address in the kubeconfig file ---------------------------- 0.48s

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Wednesday 11 March 2026 00:50:35 +0000 (0:00:00.123)       0:00:00.123 *******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Wednesday 11 March 2026 00:50:36 +0000 (0:00:00.528)       0:00:00.651 *******
ok: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Wednesday 11 March 2026 00:50:36 +0000 (0:00:00.525)       0:00:01.177 *******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Wednesday 11 March 2026 00:50:37 +0000 (0:00:00.665)       0:00:01.842 *******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Wednesday 11 March 2026 00:50:38 +0000 (0:00:01.333)       0:00:03.175 *******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Wednesday 11 March 2026 00:50:39 +0000 (0:00:00.535)       0:00:03.711 *******
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Wednesday 11 March 2026 00:50:40 +0000 (0:00:01.362)       0:00:05.073 *******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Wednesday 11 March 2026 00:50:41 +0000 (0:00:01.049)       0:00:06.122 *******
ok: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Wednesday 11 March 2026 00:50:42 +0000 (0:00:00.443)       0:00:06.566 *******
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=9    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Wednesday 11 March 2026 00:50:42 +0000 (0:00:00.351)       0:00:06.917 *******
===============================================================================
Make kubeconfig available for use inside the manager service ------------ 1.36s
Write kubeconfig file --------------------------------------------------- 1.33s
Change server address in the kubeconfig inside the manager
service ------ 1.05s 2026-03-11 00:51:23.066203 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.67s 2026-03-11 00:51:23.066207 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.54s 2026-03-11 00:51:23.066212 | orchestrator | Get home directory of operator user ------------------------------------- 0.53s 2026-03-11 00:51:23.066216 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2026-03-11 00:51:23.066220 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2026-03-11 00:51:23.066225 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s 2026-03-11 00:51:23.066229 | orchestrator | 2026-03-11 00:51:23.066233 | orchestrator | 2026-03-11 00:51:23.066242 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-11 00:51:23.066246 | orchestrator | 2026-03-11 00:51:23.066251 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-11 00:51:23.066255 | orchestrator | Wednesday 11 March 2026 00:49:11 +0000 (0:00:00.111) 0:00:00.111 ******* 2026-03-11 00:51:23.066259 | orchestrator | ok: [localhost] => { 2026-03-11 00:51:23.066264 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-11 00:51:23.066269 | orchestrator | } 2026-03-11 00:51:23.066274 | orchestrator | 2026-03-11 00:51:23.066278 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-11 00:51:23.066282 | orchestrator | Wednesday 11 March 2026 00:49:11 +0000 (0:00:00.103) 0:00:00.215 ******* 2026-03-11 00:51:23.066287 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-11 00:51:23.066293 | orchestrator | ...ignoring 2026-03-11 00:51:23.066298 | orchestrator | 2026-03-11 00:51:23.066302 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-11 00:51:23.066307 | orchestrator | Wednesday 11 March 2026 00:49:15 +0000 (0:00:03.314) 0:00:03.529 ******* 2026-03-11 00:51:23.066311 | orchestrator | skipping: [localhost] 2026-03-11 00:51:23.066315 | orchestrator | 2026-03-11 00:51:23.066320 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-11 00:51:23.066324 | orchestrator | Wednesday 11 March 2026 00:49:15 +0000 (0:00:00.206) 0:00:03.736 ******* 2026-03-11 00:51:23.066328 | orchestrator | ok: [localhost] 2026-03-11 00:51:23.066333 | orchestrator | 2026-03-11 00:51:23.066337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:51:23.066341 | orchestrator | 2026-03-11 00:51:23.066346 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:51:23.066350 | orchestrator | Wednesday 11 March 2026 00:49:15 +0000 (0:00:00.458) 0:00:04.195 ******* 2026-03-11 00:51:23.066354 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:23.066359 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:23.066363 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:23.066367 | orchestrator | 2026-03-11 00:51:23.066372 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:51:23.066376 | orchestrator | Wednesday 11 March 2026 00:49:16 +0000 (0:00:00.835) 0:00:05.031 ******* 2026-03-11 00:51:23.066381 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-11 00:51:23.066385 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-03-11 00:51:23.066389 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-11 00:51:23.066393 | orchestrator | 2026-03-11 00:51:23.066396 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-11 00:51:23.066400 | orchestrator | 2026-03-11 00:51:23.066404 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-11 00:51:23.066407 | orchestrator | Wednesday 11 March 2026 00:49:17 +0000 (0:00:00.778) 0:00:05.809 ******* 2026-03-11 00:51:23.066411 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:51:23.066415 | orchestrator | 2026-03-11 00:51:23.066419 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-11 00:51:23.066422 | orchestrator | Wednesday 11 March 2026 00:49:18 +0000 (0:00:00.878) 0:00:06.688 ******* 2026-03-11 00:51:23.066426 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:23.066430 | orchestrator | 2026-03-11 00:51:23.066433 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-11 00:51:23.066437 | orchestrator | Wednesday 11 March 2026 00:49:19 +0000 (0:00:01.319) 0:00:08.008 ******* 2026-03-11 00:51:23.066441 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:23.066445 | orchestrator | 2026-03-11 00:51:23.066448 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-11 00:51:23.066455 | orchestrator | Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.398) 0:00:08.407 ******* 2026-03-11 00:51:23.066459 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:23.066462 | orchestrator | 2026-03-11 00:51:23.066466 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-11 00:51:23.066470 | 
orchestrator | Wednesday 11 March 2026 00:49:20 +0000 (0:00:00.394) 0:00:08.801 ******* 2026-03-11 00:51:23.066473 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:23.066477 | orchestrator | 2026-03-11 00:51:23.066481 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-11 00:51:23.066484 | orchestrator | Wednesday 11 March 2026 00:49:20 +0000 (0:00:00.343) 0:00:09.145 ******* 2026-03-11 00:51:23.066488 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:23.066492 | orchestrator | 2026-03-11 00:51:23.066495 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-11 00:51:23.066499 | orchestrator | Wednesday 11 March 2026 00:49:21 +0000 (0:00:00.577) 0:00:09.722 ******* 2026-03-11 00:51:23.066503 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:51:23.066507 | orchestrator | 2026-03-11 00:51:23.066510 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-11 00:51:23.066517 | orchestrator | Wednesday 11 March 2026 00:49:22 +0000 (0:00:01.020) 0:00:10.743 ******* 2026-03-11 00:51:23.066521 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:23.066524 | orchestrator | 2026-03-11 00:51:23.066528 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-11 00:51:23.066532 | orchestrator | Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.902) 0:00:11.645 ******* 2026-03-11 00:51:23.066536 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:23.066539 | orchestrator | 2026-03-11 00:51:23.066543 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-11 00:51:23.066547 | orchestrator | Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.360) 0:00:12.005 ******* 2026-03-11 00:51:23.066551 | orchestrator | 
skipping: [testbed-node-0] 2026-03-11 00:51:23.066554 | orchestrator | 2026-03-11 00:51:23.066558 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-11 00:51:23.066562 | orchestrator | Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.360) 0:00:12.366 ******* 2026-03-11 00:51:23.066616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066641 | orchestrator | 2026-03-11 00:51:23.066645 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-11 00:51:23.066649 | orchestrator | Wednesday 11 March 2026 00:49:25 +0000 (0:00:01.086) 0:00:13.452 ******* 2026-03-11 00:51:23.066657 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066675 | orchestrator | 2026-03-11 00:51:23.066679 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-11 00:51:23.066683 | orchestrator | Wednesday 11 March 2026 00:49:27 +0000 (0:00:02.826) 0:00:16.281 ******* 2026-03-11 00:51:23.066686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-11 00:51:23.066691 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-11 00:51:23.066694 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-11 00:51:23.066698 | 
orchestrator | 2026-03-11 00:51:23.066702 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-11 00:51:23.066706 | orchestrator | Wednesday 11 March 2026 00:49:30 +0000 (0:00:02.933) 0:00:19.215 ******* 2026-03-11 00:51:23.066710 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-11 00:51:23.066713 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-11 00:51:23.066717 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-11 00:51:23.066721 | orchestrator | 2026-03-11 00:51:23.066727 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-11 00:51:23.066731 | orchestrator | Wednesday 11 March 2026 00:49:33 +0000 (0:00:02.219) 0:00:21.435 ******* 2026-03-11 00:51:23.066735 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-11 00:51:23.066739 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-11 00:51:23.066742 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-11 00:51:23.066746 | orchestrator | 2026-03-11 00:51:23.066750 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-11 00:51:23.066754 | orchestrator | Wednesday 11 March 2026 00:49:34 +0000 (0:00:01.479) 0:00:22.914 ******* 2026-03-11 00:51:23.066757 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-11 00:51:23.066761 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-11 00:51:23.066765 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-11 00:51:23.066769 | orchestrator | 2026-03-11 00:51:23.066773 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-11 00:51:23.066776 | orchestrator | Wednesday 11 March 2026 00:49:36 +0000 (0:00:02.123) 0:00:25.037 ******* 2026-03-11 00:51:23.066780 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-11 00:51:23.066784 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-11 00:51:23.066791 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-11 00:51:23.066795 | orchestrator | 2026-03-11 00:51:23.066798 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-11 00:51:23.066804 | orchestrator | Wednesday 11 March 2026 00:49:38 +0000 (0:00:01.772) 0:00:26.810 ******* 2026-03-11 00:51:23.066808 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-11 00:51:23.066812 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-11 00:51:23.066816 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-11 00:51:23.066820 | orchestrator | 2026-03-11 00:51:23.066823 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-11 00:51:23.066827 | orchestrator | Wednesday 11 March 2026 00:49:39 +0000 (0:00:01.561) 0:00:28.372 ******* 2026-03-11 00:51:23.066831 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:23.066835 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:23.066839 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:23.066842 | orchestrator | 2026-03-11 
00:51:23.066846 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-11 00:51:23.066850 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:00.436) 0:00:28.808 ******* 2026-03-11 00:51:23.066854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:51:23.066875 | orchestrator | 2026-03-11 00:51:23.066878 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-11 00:51:23.066962 | orchestrator | Wednesday 11 March 2026 00:49:41 +0000 (0:00:01.505) 0:00:30.313 ******* 2026-03-11 00:51:23.066968 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:23.066972 | orchestrator | changed: [testbed-node-1] 
2026-03-11 00:51:23.066976 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:23.066979 | orchestrator | 2026-03-11 00:51:23.066983 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-11 00:51:23.066987 | orchestrator | Wednesday 11 March 2026 00:49:42 +0000 (0:00:00.795) 0:00:31.109 ******* 2026-03-11 00:51:23.066990 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:23.066994 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:23.066998 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:23.067002 | orchestrator | 2026-03-11 00:51:23.067006 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-11 00:51:23.067009 | orchestrator | Wednesday 11 March 2026 00:49:48 +0000 (0:00:05.494) 0:00:36.603 ******* 2026-03-11 00:51:23.067013 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:23.067017 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:23.067020 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:23.067024 | orchestrator | 2026-03-11 00:51:23.067028 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-11 00:51:23.067032 | orchestrator | 2026-03-11 00:51:23.067035 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-11 00:51:23.067039 | orchestrator | Wednesday 11 March 2026 00:49:48 +0000 (0:00:00.346) 0:00:36.949 ******* 2026-03-11 00:51:23.067043 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:23.067047 | orchestrator | 2026-03-11 00:51:23.067050 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-11 00:51:23.067054 | orchestrator | Wednesday 11 March 2026 00:49:49 +0000 (0:00:00.578) 0:00:37.528 ******* 2026-03-11 00:51:23.067058 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:23.067062 | orchestrator | 2026-03-11 
00:51:23.067065 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-11 00:51:23.067069 | orchestrator | Wednesday 11 March 2026 00:49:49 +0000 (0:00:00.225) 0:00:37.753 ******* 2026-03-11 00:51:23.067073 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:23.067076 | orchestrator | 2026-03-11 00:51:23.067080 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-11 00:51:23.067084 | orchestrator | Wednesday 11 March 2026 00:49:51 +0000 (0:00:01.885) 0:00:39.638 ******* 2026-03-11 00:51:23.067088 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:23.067091 | orchestrator | 2026-03-11 00:51:23.067095 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-11 00:51:23.067099 | orchestrator | 2026-03-11 00:51:23.067103 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-11 00:51:23.067107 | orchestrator | Wednesday 11 March 2026 00:50:45 +0000 (0:00:54.014) 0:01:33.653 ******* 2026-03-11 00:51:23.067110 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:23.067114 | orchestrator | 2026-03-11 00:51:23.067118 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-11 00:51:23.067121 | orchestrator | Wednesday 11 March 2026 00:50:45 +0000 (0:00:00.579) 0:01:34.232 ******* 2026-03-11 00:51:23.067128 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:23.067132 | orchestrator | 2026-03-11 00:51:23.067136 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-11 00:51:23.067140 | orchestrator | Wednesday 11 March 2026 00:50:46 +0000 (0:00:00.204) 0:01:34.436 ******* 2026-03-11 00:51:23.067143 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:23.067147 | orchestrator | 2026-03-11 00:51:23.067151 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-03-11 00:51:23.067154 | orchestrator | Wednesday 11 March 2026 00:50:47 +0000 (0:00:01.681) 0:01:36.117 ******* 2026-03-11 00:51:23.067158 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:23.067162 | orchestrator | 2026-03-11 00:51:23.067165 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-11 00:51:23.067169 | orchestrator | 2026-03-11 00:51:23.067173 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-11 00:51:23.067179 | orchestrator | Wednesday 11 March 2026 00:51:00 +0000 (0:00:12.623) 0:01:48.741 ******* 2026-03-11 00:51:23.067183 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:23.067187 | orchestrator | 2026-03-11 00:51:23.067191 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-11 00:51:23.067194 | orchestrator | Wednesday 11 March 2026 00:51:00 +0000 (0:00:00.539) 0:01:49.280 ******* 2026-03-11 00:51:23.067198 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:23.067202 | orchestrator | 2026-03-11 00:51:23.067206 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-11 00:51:23.067209 | orchestrator | Wednesday 11 March 2026 00:51:01 +0000 (0:00:00.402) 0:01:49.682 ******* 2026-03-11 00:51:23.067213 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:23.067217 | orchestrator | 2026-03-11 00:51:23.067220 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-11 00:51:23.067224 | orchestrator | Wednesday 11 March 2026 00:51:07 +0000 (0:00:06.577) 0:01:56.260 ******* 2026-03-11 00:51:23.067228 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:23.067232 | orchestrator | 2026-03-11 00:51:23.067235 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-03-11 00:51:23.067239 | orchestrator | 2026-03-11 00:51:23.067243 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-11 00:51:23.067246 | orchestrator | Wednesday 11 March 2026 00:51:17 +0000 (0:00:09.497) 0:02:05.758 ******* 2026-03-11 00:51:23.067250 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:51:23.067254 | orchestrator | 2026-03-11 00:51:23.067258 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-11 00:51:23.067261 | orchestrator | Wednesday 11 March 2026 00:51:18 +0000 (0:00:00.678) 0:02:06.436 ******* 2026-03-11 00:51:23.067265 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-11 00:51:23.067269 | orchestrator | enable_outward_rabbitmq_True 2026-03-11 00:51:23.067273 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-11 00:51:23.067276 | orchestrator | outward_rabbitmq_restart 2026-03-11 00:51:23.067280 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:23.067287 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:23.067291 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:23.067294 | orchestrator | 2026-03-11 00:51:23.067298 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-11 00:51:23.067302 | orchestrator | skipping: no hosts matched 2026-03-11 00:51:23.067306 | orchestrator | 2026-03-11 00:51:23.067309 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-11 00:51:23.067313 | orchestrator | skipping: no hosts matched 2026-03-11 00:51:23.067317 | orchestrator | 2026-03-11 00:51:23.067321 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-11 00:51:23.067324 | orchestrator | skipping: no hosts matched 
2026-03-11 00:51:23.067328 | orchestrator | 2026-03-11 00:51:23.067335 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:51:23.067339 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-11 00:51:23.067343 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-11 00:51:23.067347 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:51:23.067351 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:51:23.067355 | orchestrator | 2026-03-11 00:51:23.067358 | orchestrator | 2026-03-11 00:51:23.067362 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:51:23.067366 | orchestrator | Wednesday 11 March 2026 00:51:20 +0000 (0:00:02.804) 0:02:09.241 ******* 2026-03-11 00:51:23.067370 | orchestrator | =============================================================================== 2026-03-11 00:51:23.067373 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.14s 2026-03-11 00:51:23.067377 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.14s 2026-03-11 00:51:23.067381 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.49s 2026-03-11 00:51:23.067384 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.31s 2026-03-11 00:51:23.067388 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.93s 2026-03-11 00:51:23.067392 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.83s 2026-03-11 00:51:23.067396 | orchestrator | rabbitmq : Enable all stable feature flags 
------------------------------ 2.80s 2026-03-11 00:51:23.067413 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.22s 2026-03-11 00:51:23.067417 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.12s 2026-03-11 00:51:23.067421 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.77s 2026-03-11 00:51:23.067425 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.70s 2026-03-11 00:51:23.067428 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.56s 2026-03-11 00:51:23.067432 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.51s 2026-03-11 00:51:23.067436 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.48s 2026-03-11 00:51:23.067440 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.32s 2026-03-11 00:51:23.067446 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.09s 2026-03-11 00:51:23.067450 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.02s 2026-03-11 00:51:23.067453 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.90s 2026-03-11 00:51:23.067457 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.88s 2026-03-11 00:51:23.067461 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2026-03-11 00:51:26.092966 | orchestrator | 2026-03-11 00:51:26 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:51:26.093175 | orchestrator | 2026-03-11 00:51:26 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:51:26.094323 | orchestrator | 2026-03-11 00:51:26 | INFO  | Task 
2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:51:26.094353 | orchestrator | 2026-03-11 00:51:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:52:23.966395 | orchestrator | 2026-03-11 00:52:23 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state STARTED 2026-03-11 00:52:23.967510 | orchestrator | 2026-03-11 00:52:23 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED 2026-03-11 00:52:23.969909 | orchestrator | 2026-03-11 00:52:23 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11
00:52:23.969973 | orchestrator | 2026-03-11 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:52:27.018621 | orchestrator | 2026-03-11 00:52:27 | INFO  | Task eeffa1ef-2e0e-4fe6-85a1-12f428142927 is in state SUCCESS 2026-03-11 00:52:27.020441 | orchestrator | 2026-03-11 00:52:27.020491 | orchestrator | 2026-03-11 00:52:27.020510 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:52:27.020517 | orchestrator | 2026-03-11 00:52:27.020522 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:52:27.020527 | orchestrator | Wednesday 11 March 2026 00:50:02 +0000 (0:00:00.155) 0:00:00.155 ******* 2026-03-11 00:52:27.020532 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:52:27.020538 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:52:27.020543 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:52:27.020548 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.020553 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.020558 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.020563 | orchestrator | 2026-03-11 00:52:27.020568 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:52:27.020573 | orchestrator | Wednesday 11 March 2026 00:50:03 +0000 (0:00:00.653) 0:00:00.808 ******* 2026-03-11 00:52:27.020578 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-11 00:52:27.020583 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-11 00:52:27.020588 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-11 00:52:27.020593 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-11 00:52:27.020611 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-11 00:52:27.020616 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-11 00:52:27.020621 | orchestrator 
| 2026-03-11 00:52:27.020626 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-11 00:52:27.020631 | orchestrator | 2026-03-11 00:52:27.020636 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-11 00:52:27.020641 | orchestrator | Wednesday 11 March 2026 00:50:04 +0000 (0:00:01.180) 0:00:01.989 ******* 2026-03-11 00:52:27.020646 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:52:27.020651 | orchestrator | 2026-03-11 00:52:27.020656 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-11 00:52:27.020661 | orchestrator | Wednesday 11 March 2026 00:50:05 +0000 (0:00:01.320) 0:00:03.309 ******* 2026-03-11 00:52:27.020668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020733 | orchestrator | 2026-03-11 00:52:27.020741 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-11 00:52:27.020749 | orchestrator | Wednesday 
11 March 2026 00:50:07 +0000 (0:00:01.253) 0:00:04.563 ******* 2026-03-11 00:52:27.020763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020798 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.020966 | orchestrator | 2026-03-11 00:52:27.020974 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-11 00:52:27.020983 | orchestrator | Wednesday 11 March 2026 00:50:08 +0000 (0:00:01.600) 0:00:06.164 ******* 2026-03-11 00:52:27.020991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021238 | orchestrator | 2026-03-11 00:52:27.021244 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-11 00:52:27.021250 | orchestrator | Wednesday 11 March 2026 00:50:09 +0000 (0:00:01.066) 0:00:07.230 ******* 2026-03-11 00:52:27.021256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021301 | orchestrator | 2026-03-11 00:52:27.021306 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-11 00:52:27.021311 | orchestrator | Wednesday 11 March 2026 00:50:11 +0000 (0:00:02.029) 0:00:09.260 ******* 2026-03-11 00:52:27.021316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021340 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.021345 | orchestrator | 2026-03-11 00:52:27.021350 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-11 00:52:27.021355 | orchestrator | Wednesday 11 March 2026 00:50:13 +0000 (0:00:01.668) 0:00:10.928 ******* 2026-03-11 00:52:27.021363 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:52:27.021369 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:52:27.021374 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:52:27.021379 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:52:27.021384 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:52:27.021389 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:52:27.021393 | orchestrator | 2026-03-11 00:52:27.021398 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-11 00:52:27.021403 | orchestrator | Wednesday 11 March 2026 00:50:15 +0000 (0:00:02.378) 0:00:13.307 ******* 2026-03-11 00:52:27.021408 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-11 00:52:27.021413 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-11 00:52:27.021417 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-11 00:52:27.021426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-11 00:52:27.021438 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-11 00:52:27.021443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-11 00:52:27.021448 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:52:27.021453 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:52:27.021458 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:52:27.021462 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:52:27.021467 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:52:27.021472 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:52:27.021477 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:52:27.021483 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:52:27.021488 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:52:27.021492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:52:27.021497 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:52:27.021502 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:52:27.021507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:52:27.021512 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:52:27.021517 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:52:27.021522 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:52:27.021528 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:52:27.021537 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:52:27.021554 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:52:27.021564 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:52:27.021572 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:52:27.021579 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:52:27.021587 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:52:27.021594 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:52:27.021602 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:52:27.021611 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:52:27.021618 | 
orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-11 00:52:27.021627 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:52:27.021635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:52:27.021643 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:52:27.021648 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-11 00:52:27.021653 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:52:27.021658 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-11 00:52:27.021663 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-11 00:52:27.021674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-11 00:52:27.021686 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-11 00:52:27.021695 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-11 00:52:27.021703 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-11 00:52:27.021710 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-11 00:52:27.021719 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-11 00:52:27.021727 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-11 00:52:27.021736 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-11 00:52:27.021744 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-11 00:52:27.021753 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-11 00:52:27.021758 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-11 00:52:27.021763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-11 00:52:27.021768 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-11 00:52:27.021777 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-11 00:52:27.021782 | orchestrator | 2026-03-11 00:52:27.021787 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:52:27.021792 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:18.548) 0:00:31.855 ******* 2026-03-11 00:52:27.021797 | orchestrator | 2026-03-11 00:52:27.021802 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:52:27.021807 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.124) 0:00:31.979 ******* 
2026-03-11 00:52:27.021811 | orchestrator | 2026-03-11 00:52:27.021816 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:52:27.021821 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.114) 0:00:32.093 ******* 2026-03-11 00:52:27.021826 | orchestrator | 2026-03-11 00:52:27.021831 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:52:27.021835 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.064) 0:00:32.158 ******* 2026-03-11 00:52:27.021840 | orchestrator | 2026-03-11 00:52:27.021845 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:52:27.021850 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.063) 0:00:32.222 ******* 2026-03-11 00:52:27.021854 | orchestrator | 2026-03-11 00:52:27.021859 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:52:27.021940 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.061) 0:00:32.283 ******* 2026-03-11 00:52:27.021946 | orchestrator | 2026-03-11 00:52:27.021951 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-11 00:52:27.021956 | orchestrator | Wednesday 11 March 2026 00:50:35 +0000 (0:00:00.066) 0:00:32.350 ******* 2026-03-11 00:52:27.021961 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:52:27.021966 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.021971 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:52:27.021975 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:52:27.021980 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.021985 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.021990 | orchestrator | 2026-03-11 00:52:27.021995 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-11 
00:52:27.022000 | orchestrator | Wednesday 11 March 2026 00:50:37 +0000 (0:00:02.059) 0:00:34.409 ******* 2026-03-11 00:52:27.022005 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:52:27.022010 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:52:27.022049 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:52:27.022054 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:52:27.022059 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:52:27.022064 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:52:27.022069 | orchestrator | 2026-03-11 00:52:27.022073 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-11 00:52:27.022078 | orchestrator | 2026-03-11 00:52:27.022083 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-11 00:52:27.022088 | orchestrator | Wednesday 11 March 2026 00:51:03 +0000 (0:00:26.666) 0:01:01.075 ******* 2026-03-11 00:52:27.022093 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:52:27.022097 | orchestrator | 2026-03-11 00:52:27.022102 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-11 00:52:27.022107 | orchestrator | Wednesday 11 March 2026 00:51:04 +0000 (0:00:00.568) 0:01:01.644 ******* 2026-03-11 00:52:27.022112 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:52:27.022117 | orchestrator | 2026-03-11 00:52:27.022127 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-11 00:52:27.022140 | orchestrator | Wednesday 11 March 2026 00:51:04 +0000 (0:00:00.467) 0:01:02.111 ******* 2026-03-11 00:52:27.022145 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.022150 | orchestrator | ok: [testbed-node-0] 2026-03-11 
00:52:27.022155 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.022160 | orchestrator | 2026-03-11 00:52:27.022165 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-11 00:52:27.022169 | orchestrator | Wednesday 11 March 2026 00:51:05 +0000 (0:00:00.884) 0:01:02.995 ******* 2026-03-11 00:52:27.022174 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.022179 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.022184 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.022189 | orchestrator | 2026-03-11 00:52:27.022194 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-11 00:52:27.022198 | orchestrator | Wednesday 11 March 2026 00:51:05 +0000 (0:00:00.330) 0:01:03.326 ******* 2026-03-11 00:52:27.022203 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.022208 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.022213 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.022217 | orchestrator | 2026-03-11 00:52:27.022222 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-11 00:52:27.022227 | orchestrator | Wednesday 11 March 2026 00:51:06 +0000 (0:00:00.434) 0:01:03.760 ******* 2026-03-11 00:52:27.022232 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.022236 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.022241 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.022246 | orchestrator | 2026-03-11 00:52:27.022251 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-11 00:52:27.022255 | orchestrator | Wednesday 11 March 2026 00:51:06 +0000 (0:00:00.397) 0:01:04.158 ******* 2026-03-11 00:52:27.022260 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.022265 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.022270 | orchestrator | ok: [testbed-node-2] 2026-03-11 
00:52:27.022274 | orchestrator | 2026-03-11 00:52:27.022279 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-11 00:52:27.022284 | orchestrator | Wednesday 11 March 2026 00:51:07 +0000 (0:00:00.610) 0:01:04.768 ******* 2026-03-11 00:52:27.022289 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022293 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022298 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022303 | orchestrator | 2026-03-11 00:52:27.022308 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-11 00:52:27.022313 | orchestrator | Wednesday 11 March 2026 00:51:07 +0000 (0:00:00.523) 0:01:05.292 ******* 2026-03-11 00:52:27.022317 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022322 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022327 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022332 | orchestrator | 2026-03-11 00:52:27.022337 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-11 00:52:27.022342 | orchestrator | Wednesday 11 March 2026 00:51:08 +0000 (0:00:00.277) 0:01:05.570 ******* 2026-03-11 00:52:27.022346 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022351 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022356 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022361 | orchestrator | 2026-03-11 00:52:27.022366 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-11 00:52:27.022370 | orchestrator | Wednesday 11 March 2026 00:51:08 +0000 (0:00:00.263) 0:01:05.833 ******* 2026-03-11 00:52:27.022379 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022387 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022396 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
00:52:27.022404 | orchestrator | 2026-03-11 00:52:27.022412 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-11 00:52:27.022420 | orchestrator | Wednesday 11 March 2026 00:51:08 +0000 (0:00:00.399) 0:01:06.233 ******* 2026-03-11 00:52:27.022434 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022443 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022452 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022461 | orchestrator | 2026-03-11 00:52:27.022469 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-11 00:52:27.022478 | orchestrator | Wednesday 11 March 2026 00:51:09 +0000 (0:00:00.279) 0:01:06.512 ******* 2026-03-11 00:52:27.022487 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022495 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022504 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022513 | orchestrator | 2026-03-11 00:52:27.022523 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-11 00:52:27.022531 | orchestrator | Wednesday 11 March 2026 00:51:09 +0000 (0:00:00.222) 0:01:06.735 ******* 2026-03-11 00:52:27.022540 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022549 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022557 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022566 | orchestrator | 2026-03-11 00:52:27.022572 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-11 00:52:27.022577 | orchestrator | Wednesday 11 March 2026 00:51:09 +0000 (0:00:00.228) 0:01:06.964 ******* 2026-03-11 00:52:27.022585 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022593 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022602 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
00:52:27.022610 | orchestrator | 2026-03-11 00:52:27.022619 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-11 00:52:27.022627 | orchestrator | Wednesday 11 March 2026 00:51:10 +0000 (0:00:00.373) 0:01:07.337 ******* 2026-03-11 00:52:27.022635 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022643 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022652 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022660 | orchestrator | 2026-03-11 00:52:27.022666 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-11 00:52:27.022671 | orchestrator | Wednesday 11 March 2026 00:51:10 +0000 (0:00:00.244) 0:01:07.581 ******* 2026-03-11 00:52:27.022679 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022687 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022696 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022704 | orchestrator | 2026-03-11 00:52:27.022718 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-11 00:52:27.022727 | orchestrator | Wednesday 11 March 2026 00:51:10 +0000 (0:00:00.288) 0:01:07.869 ******* 2026-03-11 00:52:27.022735 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022743 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022751 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.022759 | orchestrator | 2026-03-11 00:52:27.022768 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-11 00:52:27.022776 | orchestrator | Wednesday 11 March 2026 00:51:10 +0000 (0:00:00.247) 0:01:08.117 ******* 2026-03-11 00:52:27.022784 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.022792 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.022800 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
00:52:27.022809 | orchestrator | 2026-03-11 00:52:27.022817 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-11 00:52:27.022825 | orchestrator | Wednesday 11 March 2026 00:51:11 +0000 (0:00:00.275) 0:01:08.393 ******* 2026-03-11 00:52:27.022833 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:52:27.022841 | orchestrator | 2026-03-11 00:52:27.022849 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-11 00:52:27.022858 | orchestrator | Wednesday 11 March 2026 00:51:11 +0000 (0:00:00.633) 0:01:09.026 ******* 2026-03-11 00:52:27.022879 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.022888 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.022903 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.022911 | orchestrator | 2026-03-11 00:52:27.022919 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-11 00:52:27.022927 | orchestrator | Wednesday 11 March 2026 00:51:12 +0000 (0:00:00.387) 0:01:09.414 ******* 2026-03-11 00:52:27.022935 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.022943 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.022951 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.022959 | orchestrator | 2026-03-11 00:52:27.023024 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-11 00:52:27.023043 | orchestrator | Wednesday 11 March 2026 00:51:12 +0000 (0:00:00.399) 0:01:09.813 ******* 2026-03-11 00:52:27.023052 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.023061 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.023070 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.023078 | orchestrator | 2026-03-11 00:52:27.023086 | orchestrator | TASK [ovn-db : Check 
SB cluster status] **************************************** 2026-03-11 00:52:27.023094 | orchestrator | Wednesday 11 March 2026 00:51:12 +0000 (0:00:00.440) 0:01:10.254 ******* 2026-03-11 00:52:27.023102 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.023110 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.023118 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.023126 | orchestrator | 2026-03-11 00:52:27.023134 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-11 00:52:27.023142 | orchestrator | Wednesday 11 March 2026 00:51:13 +0000 (0:00:00.311) 0:01:10.565 ******* 2026-03-11 00:52:27.023151 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.023159 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.023167 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.023176 | orchestrator | 2026-03-11 00:52:27.023181 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-11 00:52:27.023186 | orchestrator | Wednesday 11 March 2026 00:51:13 +0000 (0:00:00.299) 0:01:10.865 ******* 2026-03-11 00:52:27.023192 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.023200 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.023208 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.023217 | orchestrator | 2026-03-11 00:52:27.023227 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-11 00:52:27.023239 | orchestrator | Wednesday 11 March 2026 00:51:13 +0000 (0:00:00.282) 0:01:11.147 ******* 2026-03-11 00:52:27.023247 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.023255 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.023263 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.023271 | orchestrator | 2026-03-11 00:52:27.023278 | orchestrator | TASK 
[ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-11 00:52:27.023286 | orchestrator | Wednesday 11 March 2026 00:51:14 +0000 (0:00:00.481) 0:01:11.628 ******* 2026-03-11 00:52:27.023294 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.023302 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.023328 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.023337 | orchestrator | 2026-03-11 00:52:27.023345 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-11 00:52:27.023353 | orchestrator | Wednesday 11 March 2026 00:51:14 +0000 (0:00:00.291) 0:01:11.920 ******* 2026-03-11 00:52:27.023363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023411 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023456 | orchestrator | 2026-03-11 00:52:27.023461 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-11 00:52:27.023466 | orchestrator | Wednesday 11 March 2026 00:51:16 +0000 (0:00:01.566) 0:01:13.486 ******* 2026-03-11 00:52:27.023471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023525 | orchestrator | 2026-03-11 00:52:27.023530 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-11 00:52:27.023535 | orchestrator | Wednesday 11 March 2026 00:51:21 +0000 (0:00:04.928) 0:01:18.415 ******* 2026-03-11 00:52:27.023540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023580 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.023595 | orchestrator | 2026-03-11 00:52:27.023603 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:52:27.023611 | orchestrator | Wednesday 11 March 2026 00:51:23 +0000 (0:00:02.717) 0:01:21.132 ******* 2026-03-11 00:52:27.023622 | orchestrator | 2026-03-11 00:52:27.023631 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:52:27.023639 | orchestrator | Wednesday 11 March 2026 00:51:23 +0000 (0:00:00.065) 0:01:21.198 ******* 2026-03-11 00:52:27.023646 | orchestrator | 2026-03-11 00:52:27.023654 | orchestrator | TASK [ovn-db : Flush handlers] 
*************************************************
2026-03-11 00:52:27.023666 | orchestrator | Wednesday 11 March 2026 00:51:23 +0000 (0:00:00.063) 0:01:21.262 *******
2026-03-11 00:52:27.023674 | orchestrator |
2026-03-11 00:52:27.023682 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-11 00:52:27.023690 | orchestrator | Wednesday 11 March 2026 00:51:24 +0000 (0:00:00.068) 0:01:21.330 *******
2026-03-11 00:52:27.023698 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:52:27.023706 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:52:27.023714 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:52:27.023722 | orchestrator |
2026-03-11 00:52:27.023730 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-11 00:52:27.023739 | orchestrator | Wednesday 11 March 2026 00:51:31 +0000 (0:00:07.647) 0:01:28.977 *******
2026-03-11 00:52:27.023747 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:52:27.023755 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:52:27.023763 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:52:27.023771 | orchestrator |
2026-03-11 00:52:27.023780 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-11 00:52:27.023788 | orchestrator | Wednesday 11 March 2026 00:51:38 +0000 (0:00:07.239) 0:01:36.217 *******
2026-03-11 00:52:27.023797 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:52:27.023803 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:52:27.023808 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:52:27.023812 | orchestrator |
2026-03-11 00:52:27.023817 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-11 00:52:27.023822 | orchestrator | Wednesday 11 March 2026 00:51:47 +0000 (0:00:08.195) 0:01:44.413 *******
2026-03-11 00:52:27.023827 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:52:27.023831 | orchestrator |
2026-03-11 00:52:27.023838 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-11 00:52:27.023847 | orchestrator | Wednesday 11 March 2026 00:51:47 +0000 (0:00:00.116) 0:01:44.529 *******
2026-03-11 00:52:27.023855 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.023885 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.023894 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.023902 | orchestrator |
2026-03-11 00:52:27.023916 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-11 00:52:27.023930 | orchestrator | Wednesday 11 March 2026 00:51:48 +0000 (0:00:00.958) 0:01:45.487 *******
2026-03-11 00:52:27.023938 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:52:27.023946 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:52:27.023954 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:52:27.023963 | orchestrator |
2026-03-11 00:52:27.023971 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-11 00:52:27.023978 | orchestrator | Wednesday 11 March 2026 00:51:48 +0000 (0:00:00.616) 0:01:46.103 *******
2026-03-11 00:52:27.023987 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.023995 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.024004 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.024012 | orchestrator |
2026-03-11 00:52:27.024020 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-11 00:52:27.024028 | orchestrator | Wednesday 11 March 2026 00:51:49 +0000 (0:00:00.726) 0:01:46.829 *******
2026-03-11 00:52:27.024036 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:52:27.024044 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:52:27.024053 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:52:27.024061 | orchestrator |
2026-03-11 00:52:27.024069 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-11 00:52:27.024077 | orchestrator | Wednesday 11 March 2026 00:51:50 +0000 (0:00:00.924) 0:01:47.754 *******
2026-03-11 00:52:27.024085 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.024093 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.024101 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.024110 | orchestrator |
2026-03-11 00:52:27.024124 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-11 00:52:27.024132 | orchestrator | Wednesday 11 March 2026 00:51:51 +0000 (0:00:00.762) 0:01:48.516 *******
2026-03-11 00:52:27.024140 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.024149 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.024156 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.024164 | orchestrator |
2026-03-11 00:52:27.024173 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-11 00:52:27.024180 | orchestrator | Wednesday 11 March 2026 00:51:51 +0000 (0:00:00.782) 0:01:49.299 *******
2026-03-11 00:52:27.024189 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.024197 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.024206 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.024214 | orchestrator |
2026-03-11 00:52:27.024222 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-11 00:52:27.024230 | orchestrator | Wednesday 11 March 2026 00:51:52 +0000 (0:00:00.305) 0:01:49.604 *******
2026-03-11 00:52:27.024238 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes':
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024247 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024256 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024384 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024396 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024406 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024424 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024438 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024446 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024454 | orchestrator | 2026-03-11 00:52:27.024463 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-11 00:52:27.024471 | orchestrator | Wednesday 11 March 2026 00:51:53 +0000 (0:00:01.348) 0:01:50.952 ******* 2026-03-11 00:52:27.024480 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024488 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024497 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024505 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024538 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024569 | orchestrator | 
2026-03-11 00:52:27.024578 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-11 00:52:27.024586 | orchestrator | Wednesday 11 March 2026 00:51:57 +0000 (0:00:03.794) 0:01:54.746 ******* 2026-03-11 00:52:27.024593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024603 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024613 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024642 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:52:27.024652 | orchestrator | 2026-03-11 00:52:27.024657 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:52:27.024662 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:02.748) 0:01:57.495 ******* 2026-03-11 00:52:27.024667 | orchestrator | 2026-03-11 00:52:27.024672 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:52:27.024677 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:00.072) 0:01:57.567 ******* 2026-03-11 00:52:27.024681 | orchestrator | 2026-03-11 00:52:27.024686 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:52:27.024691 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:00.076) 0:01:57.644 ******* 2026-03-11 00:52:27.024696 | orchestrator | 2026-03-11 00:52:27.024700 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-11 00:52:27.024705 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:00.071) 0:01:57.716 ******* 2026-03-11 00:52:27.024710 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:52:27.024715 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:52:27.024720 | orchestrator | 2026-03-11 00:52:27.024725 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-11 00:52:27.024729 | orchestrator | Wednesday 11 March 2026 00:52:06 +0000 (0:00:06.464) 0:02:04.180 ******* 2026-03-11 00:52:27.024734 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:52:27.024739 | orchestrator | changed: [testbed-node-2] 2026-03-11 
00:52:27.024744 | orchestrator | 2026-03-11 00:52:27.024749 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-11 00:52:27.024753 | orchestrator | Wednesday 11 March 2026 00:52:13 +0000 (0:00:06.149) 0:02:10.329 ******* 2026-03-11 00:52:27.024758 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:52:27.024763 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:52:27.024768 | orchestrator | 2026-03-11 00:52:27.024772 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-11 00:52:27.024777 | orchestrator | Wednesday 11 March 2026 00:52:19 +0000 (0:00:06.674) 0:02:17.004 ******* 2026-03-11 00:52:27.024782 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:52:27.024789 | orchestrator | 2026-03-11 00:52:27.024798 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-11 00:52:27.024806 | orchestrator | Wednesday 11 March 2026 00:52:19 +0000 (0:00:00.136) 0:02:17.140 ******* 2026-03-11 00:52:27.024814 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:52:27.024822 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:52:27.024830 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:52:27.024839 | orchestrator | 2026-03-11 00:52:27.024847 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-11 00:52:27.024855 | orchestrator | Wednesday 11 March 2026 00:52:20 +0000 (0:00:00.978) 0:02:18.118 ******* 2026-03-11 00:52:27.024880 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:52:27.024886 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:52:27.024894 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:52:27.024908 | orchestrator | 2026-03-11 00:52:27.024916 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-11 00:52:27.024924 | orchestrator | Wednesday 11 March 2026 00:52:21 
+0000 (0:00:00.768) 0:02:18.887 *******
2026-03-11 00:52:27.024933 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.024941 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.024950 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.024958 | orchestrator |
2026-03-11 00:52:27.024966 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-11 00:52:27.024973 | orchestrator | Wednesday 11 March 2026 00:52:22 +0000 (0:00:00.756) 0:02:19.643 *******
2026-03-11 00:52:27.024978 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:52:27.024983 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:52:27.024989 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:52:27.024997 | orchestrator |
2026-03-11 00:52:27.025005 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-11 00:52:27.025013 | orchestrator | Wednesday 11 March 2026 00:52:22 +0000 (0:00:00.568) 0:02:20.212 *******
2026-03-11 00:52:27.025023 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.025031 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.025039 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.025048 | orchestrator |
2026-03-11 00:52:27.025057 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-11 00:52:27.025065 | orchestrator | Wednesday 11 March 2026 00:52:23 +0000 (0:00:00.978) 0:02:21.190 *******
2026-03-11 00:52:27.025074 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:52:27.025083 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:52:27.025091 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:52:27.025099 | orchestrator |
2026-03-11 00:52:27.025107 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:52:27.025116 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-11 00:52:27.025125 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-11 00:52:27.025142 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-11 00:52:27.025151 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:52:27.025159 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:52:27.025167 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:52:27.025176 | orchestrator |
2026-03-11 00:52:27.025185 | orchestrator |
2026-03-11 00:52:27.025193 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:52:27.025202 | orchestrator | Wednesday 11 March 2026 00:52:25 +0000 (0:00:01.180) 0:02:22.371 *******
2026-03-11 00:52:27.025210 | orchestrator | ===============================================================================
2026-03-11 00:52:27.025219 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.67s
2026-03-11 00:52:27.025227 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.55s
2026-03-11 00:52:27.025235 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.87s
2026-03-11 00:52:27.025243 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.11s
2026-03-11 00:52:27.025252 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.39s
2026-03-11 00:52:27.025260 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.93s
2026-03-11 00:52:27.025268 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.79s
2026-03-11 00:52:27.025283 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.75s
2026-03-11 00:52:27.025291 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.72s
2026-03-11 00:52:27.025299 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.38s
2026-03-11 00:52:27.025307 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.06s
2026-03-11 00:52:27.025315 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.03s
2026-03-11 00:52:27.025323 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.67s
2026-03-11 00:52:27.025331 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.60s
2026-03-11 00:52:27.025340 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s
2026-03-11 00:52:27.025348 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.35s
2026-03-11 00:52:27.025356 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.32s
2026-03-11 00:52:27.025364 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.25s
2026-03-11 00:52:27.025372 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.18s
2026-03-11 00:52:27.025380 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.18s
2026-03-11 00:52:27.025388 | orchestrator | 2026-03-11 00:52:27 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state STARTED
2026-03-11 00:52:27.025397 | orchestrator | 2026-03-11 00:52:27 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED
2026-03-11 00:52:27.025405 | orchestrator | 2026-03-11 00:52:27 | INFO  | Wait 1 second(s) until the next check
2026-03-11
00:55:05.333225 | orchestrator | 2026-03-11 00:55:05 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:55:05.340807 | orchestrator | 2026-03-11 00:55:05 | INFO  | Task 435cafc8-982e-48df-b069-03b64453af9b is in state SUCCESS 2026-03-11 00:55:05.341569 | orchestrator | 2026-03-11 00:55:05.341607 | orchestrator | 
2026-03-11 00:55:05.341615 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:55:05.341623 | orchestrator | 2026-03-11 00:55:05.341630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:55:05.341636 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.397) 0:00:00.397 ******* 2026-03-11 00:55:05.341643 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.341650 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.341656 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.341663 | orchestrator | 2026-03-11 00:55:05.341678 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:55:05.341683 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.449) 0:00:00.847 ******* 2026-03-11 00:55:05.341697 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-11 00:55:05.341701 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-11 00:55:05.341705 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-11 00:55:05.341709 | orchestrator | 2026-03-11 00:55:05.341713 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-11 00:55:05.341716 | orchestrator | 2026-03-11 00:55:05.341720 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-11 00:55:05.341724 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.952) 0:00:01.799 ******* 2026-03-11 00:55:05.341728 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.341733 | orchestrator | 2026-03-11 00:55:05.341740 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-11 00:55:05.341749 | 
orchestrator | Wednesday 11 March 2026 00:48:55 +0000 (0:00:01.263) 0:00:03.063 ******* 2026-03-11 00:55:05.341756 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.341776 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.341783 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.341789 | orchestrator | 2026-03-11 00:55:05.341796 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-11 00:55:05.341802 | orchestrator | Wednesday 11 March 2026 00:48:56 +0000 (0:00:01.148) 0:00:04.213 ******* 2026-03-11 00:55:05.341809 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.341816 | orchestrator | 2026-03-11 00:55:05.341820 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-11 00:55:05.341824 | orchestrator | Wednesday 11 March 2026 00:48:57 +0000 (0:00:01.218) 0:00:05.431 ******* 2026-03-11 00:55:05.341828 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.341831 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.341835 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.341839 | orchestrator | 2026-03-11 00:55:05.341843 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-11 00:55:05.341847 | orchestrator | Wednesday 11 March 2026 00:48:59 +0000 (0:00:01.714) 0:00:07.146 ******* 2026-03-11 00:55:05.341851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:55:05.341896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:55:05.341901 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:55:05.341905 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:55:05.341908 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:55:05.341912 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:55:05.341916 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-11 00:55:05.341920 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-11 00:55:05.341924 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-11 00:55:05.341928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-11 00:55:05.341932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-11 00:55:05.341936 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-11 00:55:05.341939 | orchestrator | 2026-03-11 00:55:05.341983 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-11 00:55:05.341989 | orchestrator | Wednesday 11 March 2026 00:49:02 +0000 (0:00:03.295) 0:00:10.441 ******* 2026-03-11 00:55:05.342000 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-11 00:55:05.342004 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-11 00:55:05.342008 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-11 00:55:05.342066 | orchestrator | 2026-03-11 00:55:05.342072 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-11 00:55:05.342076 | orchestrator | Wednesday 11 March 2026 00:49:03 +0000 (0:00:01.391) 0:00:11.833 ******* 2026-03-11 00:55:05.342082 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-11 00:55:05.342089 | orchestrator | changed: 
[testbed-node-1] => (item=ip_vs) 2026-03-11 00:55:05.342099 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-11 00:55:05.342106 | orchestrator | 2026-03-11 00:55:05.342112 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-11 00:55:05.342118 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:01.540) 0:00:13.373 ******* 2026-03-11 00:55:05.342124 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-11 00:55:05.342131 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.342148 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-11 00:55:05.342156 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.342163 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-11 00:55:05.342171 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.342177 | orchestrator | 2026-03-11 00:55:05.342185 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-11 00:55:05.342192 | orchestrator | Wednesday 11 March 2026 00:49:06 +0000 (0:00:00.759) 0:00:14.133 ******* 2026-03-11 00:55:05.342203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342405 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.342424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.342429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.342439 | orchestrator | 2026-03-11 00:55:05.342443 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-11 00:55:05.342448 | orchestrator | Wednesday 11 March 2026 00:49:08 +0000 (0:00:02.595) 0:00:16.728 ******* 2026-03-11 00:55:05.342452 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.342457 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.342462 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.342466 | orchestrator | 2026-03-11 00:55:05.342470 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-11 00:55:05.342475 | orchestrator | Wednesday 11 March 2026 00:49:10 +0000 (0:00:01.408) 0:00:18.137 ******* 2026-03-11 00:55:05.342479 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-11 00:55:05.342487 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-11 00:55:05.342491 | orchestrator | changed: [testbed-node-1] => (item=users) 
2026-03-11 00:55:05.342496 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-11 00:55:05.342500 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-11 00:55:05.342505 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-11 00:55:05.342509 | orchestrator | 2026-03-11 00:55:05.342514 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-11 00:55:05.342518 | orchestrator | Wednesday 11 March 2026 00:49:12 +0000 (0:00:02.498) 0:00:20.635 ******* 2026-03-11 00:55:05.342523 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.342527 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.342532 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.342536 | orchestrator | 2026-03-11 00:55:05.342541 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-11 00:55:05.342546 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:01.468) 0:00:22.103 ******* 2026-03-11 00:55:05.342550 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.342554 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.342559 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.342563 | orchestrator | 2026-03-11 00:55:05.342567 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-11 00:55:05.342572 | orchestrator | Wednesday 11 March 2026 00:49:16 +0000 (0:00:02.778) 0:00:24.882 ******* 2026-03-11 00:55:05.342576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.342584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.342589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.342594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5', 
'__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-11 00:55:05.342601 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.342641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.342652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.342656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.342664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-11 00:55:05.342668 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.342675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.342679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.342689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.342693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-11 00:55:05.342697 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.342701 | orchestrator | 2026-03-11 00:55:05.342705 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-03-11 00:55:05.342709 | orchestrator | Wednesday 11 March 2026 00:49:17 +0000 (0:00:00.848) 0:00:25.730 ******* 2026-03-11 00:55:05.342713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.342748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5', 
'__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-11 00:55:05.342758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.342788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5', 
'__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-11 00:55:05.342799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.342863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5', 
'__omit_place_holder__c4467d663b81ccd8307bb4808cf61286ad4e92a5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-11 00:55:05.342867 | orchestrator | 2026-03-11 00:55:05.342871 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-11 00:55:05.342875 | orchestrator | Wednesday 11 March 2026 00:49:21 +0000 (0:00:03.468) 0:00:29.199 ******* 2026-03-11 00:55:05.342879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342909 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.342913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.342918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.342923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.343012 | orchestrator | 2026-03-11 00:55:05.343020 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-11 00:55:05.343024 | orchestrator | Wednesday 11 March 2026 00:49:25 +0000 (0:00:04.012) 0:00:33.211 ******* 2026-03-11 00:55:05.343028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-11 00:55:05.343036 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-11 00:55:05.343040 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-11 00:55:05.343048 | orchestrator | 2026-03-11 00:55:05.343052 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-11 00:55:05.343056 | orchestrator | Wednesday 11 March 2026 00:49:28 +0000 (0:00:03.081) 0:00:36.293 ******* 2026-03-11 00:55:05.343062 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-11 00:55:05.343066 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-11 00:55:05.343070 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-11 00:55:05.343074 | orchestrator | 2026-03-11 00:55:05.343078 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-11 00:55:05.343082 | orchestrator | Wednesday 11 March 2026 00:49:33 +0000 
(0:00:04.958) 0:00:41.251 ******* 2026-03-11 00:55:05.343085 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.343089 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.343093 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.343097 | orchestrator | 2026-03-11 00:55:05.343101 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-11 00:55:05.343105 | orchestrator | Wednesday 11 March 2026 00:49:33 +0000 (0:00:00.509) 0:00:41.761 ******* 2026-03-11 00:55:05.343108 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-11 00:55:05.343113 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-11 00:55:05.343116 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-11 00:55:05.343120 | orchestrator | 2026-03-11 00:55:05.343124 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-11 00:55:05.343128 | orchestrator | Wednesday 11 March 2026 00:49:36 +0000 (0:00:02.577) 0:00:44.339 ******* 2026-03-11 00:55:05.343132 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-11 00:55:05.343136 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-11 00:55:05.343139 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-11 00:55:05.343143 | orchestrator | 2026-03-11 00:55:05.343148 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-11 00:55:05.343152 | orchestrator | Wednesday 11 March 2026 
00:49:39 +0000 (0:00:02.854) 0:00:47.194 ******* 2026-03-11 00:55:05.343155 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-11 00:55:05.343159 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-11 00:55:05.343163 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-11 00:55:05.343167 | orchestrator | 2026-03-11 00:55:05.343171 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-11 00:55:05.343174 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:01.334) 0:00:48.528 ******* 2026-03-11 00:55:05.343178 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-11 00:55:05.343182 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-11 00:55:05.343186 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-11 00:55:05.343190 | orchestrator | 2026-03-11 00:55:05.343195 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-11 00:55:05.343201 | orchestrator | Wednesday 11 March 2026 00:49:42 +0000 (0:00:01.633) 0:00:50.162 ******* 2026-03-11 00:55:05.343211 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.343218 | orchestrator | 2026-03-11 00:55:05.343224 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-11 00:55:05.343235 | orchestrator | Wednesday 11 March 2026 00:49:42 +0000 (0:00:00.708) 0:00:50.870 ******* 2026-03-11 00:55:05.343241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.343252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.343262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.343269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.343276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.343283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.343294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.343301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.343317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.343325 | orchestrator | 2026-03-11 00:55:05.343332 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-11 00:55:05.343339 | orchestrator | Wednesday 11 March 2026 00:49:45 +0000 (0:00:02.931) 0:00:53.802 ******* 2026-03-11 00:55:05.343345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343367 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.343374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343395 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.343402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343414 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.343418 | orchestrator | 2026-03-11 00:55:05.343422 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-11 
00:55:05.343426 | orchestrator | Wednesday 11 March 2026 00:49:46 +0000 (0:00:00.533) 0:00:54.336 ******* 2026-03-11 00:55:05.343433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343462 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.343471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343490 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.343496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343521 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.343527 | orchestrator | 2026-03-11 00:55:05.343751 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-11 00:55:05.343801 | orchestrator | Wednesday 11 March 2026 00:49:47 +0000 (0:00:00.739) 0:00:55.075 ******* 2026-03-11 00:55:05.343817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343843 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.343850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343869 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.343878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343899 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.343906 | orchestrator | 2026-03-11 00:55:05.343912 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-11 00:55:05.343921 | orchestrator | Wednesday 11 March 2026 00:49:47 +0000 (0:00:00.837) 0:00:55.913 ******* 2026-03-11 00:55:05.343928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343948 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.343955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.343986 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.343991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.343995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.343999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344003 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.344007 | orchestrator | 2026-03-11 00:55:05.344011 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-11 00:55:05.344015 | orchestrator | Wednesday 11 March 2026 00:49:48 +0000 (0:00:00.716) 0:00:56.629 ******* 2026-03-11 00:55:05.344019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344039 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.344043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344055 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.344059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344079 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.344083 | orchestrator | 2026-03-11 00:55:05.344087 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-11 00:55:05.344091 | orchestrator | Wednesday 11 March 2026 00:49:49 +0000 (0:00:00.922) 0:00:57.552 ******* 2026-03-11 00:55:05.344094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-11 00:55:05.344098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344107 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.344111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-11 00:55:05.344115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344128 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.344134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-11 00:55:05.344138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344146 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.344150 | orchestrator | 2026-03-11 00:55:05.344154 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-11 00:55:05.344158 | orchestrator | Wednesday 11 March 2026 00:49:51 +0000 (0:00:01.772) 0:00:59.324 ******* 2026-03-11 00:55:05.344162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344178 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.344184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344196 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.344200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344215 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.344219 | orchestrator | 2026-03-11 00:55:05.344222 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-11 00:55:05.344228 | orchestrator | Wednesday 11 March 2026 00:49:52 +0000 (0:00:00.754) 0:01:00.079 ******* 2026-03-11 00:55:05.344235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344383 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.344387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344403 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.344410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:55:05.344414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:55:05.344418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:55:05.344422 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.344426 | orchestrator | 2026-03-11 00:55:05.344430 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-11 00:55:05.344434 | orchestrator | Wednesday 11 March 2026 00:49:52 +0000 (0:00:00.691) 0:01:00.771 ******* 2026-03-11 00:55:05.344438 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-11 00:55:05.344442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-11 00:55:05.344446 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-11 00:55:05.344450 | orchestrator | 2026-03-11 00:55:05.344454 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-11 00:55:05.344458 | orchestrator | Wednesday 11 March 2026 00:49:54 +0000 (0:00:01.867) 0:01:02.639 ******* 2026-03-11 00:55:05.344461 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-11 00:55:05.344466 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-11 00:55:05.344500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-11 00:55:05.344509 | orchestrator | 2026-03-11 00:55:05.344513 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-11 00:55:05.344517 | orchestrator | Wednesday 11 March 2026 00:49:56 +0000 (0:00:01.340) 0:01:03.979 ******* 2026-03-11 00:55:05.344521 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-11 00:55:05.344524 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-11 00:55:05.344531 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-11 00:55:05.344535 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-11 00:55:05.344539 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.344543 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-11 00:55:05.344547 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.344550 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-11 00:55:05.344554 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.344558 | orchestrator | 2026-03-11 00:55:05.344562 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-11 00:55:05.344566 | orchestrator | Wednesday 11 March 2026 00:49:56 +0000 (0:00:00.700) 0:01:04.680 ******* 2026-03-11 00:55:05.344573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.344580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.344584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-11 00:55:05.344588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.344592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.344599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:55:05.344603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.344609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.344616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:55:05.344620 | orchestrator | 2026-03-11 00:55:05.344624 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-11 00:55:05.344627 | orchestrator | Wednesday 11 March 2026 00:49:59 +0000 (0:00:02.320) 0:01:07.001 ******* 2026-03-11 00:55:05.344631 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.344635 | orchestrator | 2026-03-11 00:55:05.344639 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-11 00:55:05.344643 | orchestrator | Wednesday 11 March 2026 00:49:59 +0000 (0:00:00.626) 0:01:07.627 ******* 2026-03-11 00:55:05.344647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-11 00:55:05.344666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-11 00:55:05.344672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.344678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.344685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.344689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.344693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.344700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.344704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-11 00:55:05.344708 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:55:05.344716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.344720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.344724 | orchestrator |
2026-03-11 00:55:05.344728 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-03-11 00:55:05.344733 | orchestrator | Wednesday 11 March 2026 00:50:03 +0000 (0:00:03.799) 0:01:11.426 *******
2026-03-11 00:55:05.344741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:55:05.344751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:55:05.344758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.344824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.344831 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.344846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:55:05.344854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:55:05.345031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345046 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.345051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:55:05.345055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:55:05.345065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345074 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.345078 | orchestrator |
2026-03-11 00:55:05.345082 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-03-11 00:55:05.345086 | orchestrator | Wednesday 11 March 2026 00:50:04 +0000 (0:00:01.297) 0:01:12.724 *******
2026-03-11 00:55:05.345093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-11 00:55:05.345097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-11 00:55:05.345102 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.345106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-11 00:55:05.345110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-11 00:55:05.345113 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.345117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-11 00:55:05.345121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-11 00:55:05.345125 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.345130 | orchestrator |
2026-03-11 00:55:05.345134 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-03-11 00:55:05.345137 | orchestrator | Wednesday 11 March 2026 00:50:05 +0000 (0:00:01.147) 0:01:13.871 *******
2026-03-11 00:55:05.345141 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.345145 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.345149 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.345153 | orchestrator |
2026-03-11 00:55:05.345157 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-03-11 00:55:05.345161 | orchestrator | Wednesday 11 March 2026 00:50:07 +0000 (0:00:01.273) 0:01:15.144 *******
2026-03-11 00:55:05.345164 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.345168 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.345172 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.345176 | orchestrator |
2026-03-11 00:55:05.345179 | orchestrator | TASK [include_role : barbican] *************************************************
2026-03-11 00:55:05.345183 | orchestrator | Wednesday 11 March 2026 00:50:09 +0000 (0:00:02.019) 0:01:17.164 *******
2026-03-11 00:55:05.345187 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.345191 | orchestrator |
2026-03-11 00:55:05.345195 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-03-11 00:55:05.345198 | orchestrator | Wednesday 11 March 2026 00:50:09 +0000 (0:00:00.765) 0:01:17.930 *******
2026-03-11 00:55:05.345205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.345217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.345230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.345243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345258 | orchestrator |
2026-03-11 00:55:05.345261 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-11 00:55:05.345265 | orchestrator | Wednesday 11 March 2026 00:50:14 +0000 (0:00:04.174) 0:01:22.105 *******
2026-03-11 00:55:05.345269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.345273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345287 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.345293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.345297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345305 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.345309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.345315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.345328 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.345332 | orchestrator |
2026-03-11 00:55:05.345336 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-11 00:55:05.345340 | orchestrator | Wednesday 11 March 2026 00:50:14 +0000 (0:00:00.683) 0:01:22.788 *******
2026-03-11 00:55:05.345344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-11 00:55:05.345348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-11 00:55:05.345352 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.345356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-11 00:55:05.345360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-11 00:55:05.345364 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.345370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-11 00:55:05.345376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-11 00:55:05.345384 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.345393 | orchestrator |
2026-03-11 00:55:05.345399 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-11 00:55:05.345406 | orchestrator | Wednesday 11 March 2026 00:50:15 +0000 (0:00:01.042) 0:01:23.831 *******
2026-03-11 00:55:05.345412 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.345419 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.345425 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.345431 | orchestrator |
2026-03-11 00:55:05.345438 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-11 00:55:05.345444 | orchestrator | Wednesday 11 March 2026 00:50:17 +0000 (0:00:01.353) 0:01:25.185 *******
2026-03-11 00:55:05.345451 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.345457 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.345464 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.345471 | orchestrator |
2026-03-11 00:55:05.345477 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-11 00:55:05.345483 | orchestrator | Wednesday 11 March 2026 00:50:19 +0000 (0:00:02.610) 0:01:27.795 *******
2026-03-11 00:55:05.345494 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.345501 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.345683 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.345690 | orchestrator |
2026-03-11 00:55:05.345694 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-11 00:55:05.345698 | orchestrator | Wednesday 11 March 2026 00:50:20 +0000 (0:00:00.366) 0:01:28.161 *******
2026-03-11 00:55:05.345701 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.345705 | orchestrator |
2026-03-11 00:55:05.345709 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-11 00:55:05.345713 | orchestrator | Wednesday 11 March 2026 00:50:21 +0000 (0:00:01.019) 0:01:29.181 *******
2026-03-11 00:55:05.345722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-11 00:55:05.345730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-11 00:55:05.345737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-11 00:55:05.345745 | orchestrator |
2026-03-11 00:55:05.345754 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-11 00:55:05.345816 | orchestrator | Wednesday 11 March 2026 00:50:24 +0000 (0:00:03.585) 0:01:32.766 *******
2026-03-11 00:55:05.345827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-11 00:55:05.345839 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.345843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-11 00:55:05.345848 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.345864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-11 00:55:05.345871 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.345877 | orchestrator |
2026-03-11 00:55:05.345884 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-11 00:55:05.345889 | orchestrator | Wednesday 11 March 2026 00:50:26 +0000 (0:00:01.665) 0:01:34.431 *******
2026-03-11 00:55:05.345894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-11 00:55:05.345899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-11 00:55:05.345903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:55:05.345908 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.345912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:55:05.345921 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.345928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:55:05.345935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:55:05.345941 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.345948 | orchestrator | 2026-03-11 00:55:05.345952 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-11 00:55:05.345956 | orchestrator | Wednesday 11 March 2026 00:50:28 +0000 (0:00:02.159) 0:01:36.591 ******* 2026-03-11 00:55:05.345960 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.345963 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.345967 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.345971 | orchestrator | 2026-03-11 00:55:05.345974 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-11 00:55:05.345978 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.607) 0:01:37.199 ******* 2026-03-11 00:55:05.345982 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.345986 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.345990 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.345993 | orchestrator | 2026-03-11 00:55:05.345997 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-11 00:55:05.346003 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.961) 0:01:38.160 ******* 2026-03-11 00:55:05.346007 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.346037 | orchestrator | 2026-03-11 00:55:05.346045 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-11 00:55:05.346052 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.647) 0:01:38.808 ******* 2026-03-11 00:55:05.346062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.346070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 
00:55:05.346089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.346100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.346132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346148 | orchestrator | 2026-03-11 00:55:05.346152 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-11 00:55:05.346159 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:03.031) 0:01:41.839 ******* 2026-03-11 00:55:05.346163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.346168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346182 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.346188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.346194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346206 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.346210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.346220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 
00:55:05.346231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.346235 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.346239 | orchestrator | 2026-03-11 00:55:05.346243 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-11 00:55:05.346247 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:01.037) 0:01:42.877 ******* 2026-03-11 00:55:05.346251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:55:05.346255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:55:05.346260 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.346264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:55:05.346268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:55:05.346271 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.346275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:55:05.346279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:55:05.346493 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.346503 | orchestrator | 2026-03-11 00:55:05.346511 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-11 00:55:05.346518 | orchestrator | Wednesday 11 March 2026 00:50:36 +0000 (0:00:01.449) 0:01:44.326 ******* 2026-03-11 00:55:05.346525 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.346533 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.346540 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.346547 | orchestrator | 2026-03-11 00:55:05.346554 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-11 00:55:05.346598 | orchestrator | Wednesday 11 March 2026 00:50:37 +0000 (0:00:01.003) 0:01:45.329 ******* 2026-03-11 00:55:05.346610 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.346624 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.346631 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.346640 | orchestrator | 2026-03-11 00:55:05.346666 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-11 
00:55:05.346673 | orchestrator | Wednesday 11 March 2026 00:50:40 +0000 (0:00:02.629) 0:01:47.959 *******
2026-03-11 00:55:05.346680 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.346688 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.346695 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.346703 | orchestrator |
2026-03-11 00:55:05.346709 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-11 00:55:05.346718 | orchestrator | Wednesday 11 March 2026 00:50:40 +0000 (0:00:00.552) 0:01:48.511 *******
2026-03-11 00:55:05.346724 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.346730 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.346735 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.346741 | orchestrator |
2026-03-11 00:55:05.346748 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-11 00:55:05.346754 | orchestrator | Wednesday 11 March 2026 00:50:40 +0000 (0:00:00.277) 0:01:48.789 *******
2026-03-11 00:55:05.346760 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.346785 | orchestrator |
2026-03-11 00:55:05.346792 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-11 00:55:05.346798 | orchestrator | Wednesday 11 March 2026 00:50:41 +0000 (0:00:00.740) 0:01:49.529 *******
2026-03-11 00:55:05.346805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 00:55:05.346814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 00:55:05.346821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.346828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.346849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 00:55:05.346856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.346975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 00:55:05.346984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.346991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 00:55:05.347154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 00:55:05.347182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347240 | orchestrator |
2026-03-11 00:55:05.347247 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-11 00:55:05.347254 | orchestrator | Wednesday 11 March 2026 00:50:45 +0000 (0:00:04.163) 0:01:53.693 *******
2026-03-11 00:55:05.347261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 00:55:05.347287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 00:55:05.347298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 00:55:05.347335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 00:55:05.347366 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.347373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347411 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.347564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 00:55:05.347577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 00:55:05.347585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.347717 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.347723 | orchestrator |
2026-03-11 00:55:05.347730 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-11 00:55:05.347736 | orchestrator | Wednesday 11 March 2026 00:50:46 +0000 (0:00:00.849) 0:01:54.543 *******
2026-03-11 00:55:05.347746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-11 00:55:05.347753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-11 00:55:05.347760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-11 00:55:05.347802 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.347809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-11 00:55:05.347816 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.347822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-11 00:55:05.347828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-11 00:55:05.347834 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.347840 | orchestrator |
2026-03-11 00:55:05.347846 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-11 00:55:05.347857 | orchestrator | Wednesday 11 March 2026 00:50:47 +0000 (0:00:00.886) 0:01:55.429 *******
2026-03-11 00:55:05.347863 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.347868 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.347874 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.347880 | orchestrator |
2026-03-11 00:55:05.347886 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-11 00:55:05.347892 | orchestrator | Wednesday 11 March 2026 00:50:48 +0000 (0:00:01.461) 0:01:56.891 *******
2026-03-11 00:55:05.347898 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.347904 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.347910 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.347916 | orchestrator |
2026-03-11 00:55:05.347921 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-11 00:55:05.347927 | orchestrator | Wednesday 11 March 2026 00:50:50 +0000 (0:00:01.732) 0:01:58.623 *******
2026-03-11 00:55:05.347933 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.347939 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.347945 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.347950 | orchestrator |
2026-03-11 00:55:05.347956 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-11 00:55:05.347962 | orchestrator | Wednesday 11 March 2026 00:50:51 +0000 (0:00:00.586) 0:01:59.210 *******
2026-03-11 00:55:05.347968 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.347974 | orchestrator |
2026-03-11 00:55:05.347980 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-11 00:55:05.347986 | orchestrator | Wednesday 11 March 2026 00:50:52 +0000 (0:00:00.811) 0:02:00.022 *******
2026-03-11 00:55:05.348173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-11 00:55:05.348187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-11 00:55:05.348197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-11 00:55:05.348253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292',
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.348264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 00:55:05.348280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.348288 | orchestrator | 2026-03-11 00:55:05.348292 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-11 00:55:05.348334 | orchestrator | Wednesday 11 March 2026 00:50:56 +0000 (0:00:04.272) 0:02:04.294 ******* 2026-03-11 00:55:05.348340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 00:55:05.348356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.348364 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.348469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 
00:55:05.348488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.348496 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 00:55:05.348501 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.348514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.348668 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.348675 | orchestrator | 2026-03-11 00:55:05.348679 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-11 00:55:05.348683 | orchestrator | 
Wednesday 11 March 2026 00:50:58 +0000 (0:00:02.609) 0:02:06.904 ******* 2026-03-11 00:55:05.348693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:55:05.348697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:55:05.348701 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.348705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:55:05.348709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:55:05.348713 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.348718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:55:05.348722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:55:05.348725 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.348729 | orchestrator | 2026-03-11 00:55:05.348735 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-11 00:55:05.348742 | orchestrator | Wednesday 11 March 2026 00:51:01 +0000 (0:00:02.903) 0:02:09.807 ******* 2026-03-11 00:55:05.348748 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.348754 | orchestrator 
| changed: [testbed-node-1] 2026-03-11 00:55:05.348761 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.348779 | orchestrator | 2026-03-11 00:55:05.348785 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-11 00:55:05.348791 | orchestrator | Wednesday 11 March 2026 00:51:02 +0000 (0:00:01.106) 0:02:10.914 ******* 2026-03-11 00:55:05.348802 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.348809 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.348815 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.348822 | orchestrator | 2026-03-11 00:55:05.348846 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-11 00:55:05.348850 | orchestrator | Wednesday 11 March 2026 00:51:04 +0000 (0:00:01.675) 0:02:12.589 ******* 2026-03-11 00:55:05.348854 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.348858 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.348862 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.348866 | orchestrator | 2026-03-11 00:55:05.348869 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-11 00:55:05.348876 | orchestrator | Wednesday 11 March 2026 00:51:05 +0000 (0:00:00.376) 0:02:12.966 ******* 2026-03-11 00:55:05.348880 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.348884 | orchestrator | 2026-03-11 00:55:05.348887 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-11 00:55:05.348891 | orchestrator | Wednesday 11 March 2026 00:51:05 +0000 (0:00:00.827) 0:02:13.793 ******* 2026-03-11 00:55:05.348895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 00:55:05.348901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 00:55:05.348905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 00:55:05.348909 | 
orchestrator | 2026-03-11 00:55:05.348913 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-11 00:55:05.348917 | orchestrator | Wednesday 11 March 2026 00:51:09 +0000 (0:00:03.351) 0:02:17.145 ******* 2026-03-11 00:55:05.348920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 00:55:05.348928 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.348941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 00:55:05.348946 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.348953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 00:55:05.348960 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.348967 | orchestrator | 2026-03-11 00:55:05.348976 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-11 00:55:05.348982 | orchestrator | Wednesday 11 March 2026 00:51:09 +0000 (0:00:00.527) 0:02:17.672 ******* 2026-03-11 00:55:05.348988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:55:05.348995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:55:05.349001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:55:05.349007 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.349012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:55:05.349018 | orchestrator | skipping: 
[testbed-node-1] 2026-03-11 00:55:05.349068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:55:05.349114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:55:05.349123 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.349129 | orchestrator | 2026-03-11 00:55:05.349374 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-11 00:55:05.349389 | orchestrator | Wednesday 11 March 2026 00:51:10 +0000 (0:00:00.594) 0:02:18.267 ******* 2026-03-11 00:55:05.349398 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.349402 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.349406 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.349410 | orchestrator | 2026-03-11 00:55:05.349414 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-11 00:55:05.349417 | orchestrator | Wednesday 11 March 2026 00:51:11 +0000 (0:00:01.176) 0:02:19.443 ******* 2026-03-11 00:55:05.349421 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.349425 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.349429 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.349432 | orchestrator | 2026-03-11 00:55:05.349436 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-11 00:55:05.349440 | orchestrator | Wednesday 11 March 2026 00:51:13 +0000 (0:00:01.782) 0:02:21.226 ******* 2026-03-11 00:55:05.349444 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.349447 | orchestrator | skipping: [testbed-node-1] 2026-03-11 
00:55:05.349451 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.349455 | orchestrator | 2026-03-11 00:55:05.349459 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-11 00:55:05.349463 | orchestrator | Wednesday 11 March 2026 00:51:13 +0000 (0:00:00.428) 0:02:21.655 ******* 2026-03-11 00:55:05.349466 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.349470 | orchestrator | 2026-03-11 00:55:05.349474 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-11 00:55:05.349478 | orchestrator | Wednesday 11 March 2026 00:51:14 +0000 (0:00:00.927) 0:02:22.583 ******* 2026-03-11 00:55:05.349557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:55:05.349565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:55:05.349657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:55:05.349667 | orchestrator | 2026-03-11 00:55:05.349671 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-11 00:55:05.349675 | orchestrator | Wednesday 11 March 2026 00:51:18 +0000 (0:00:03.887) 0:02:26.470 ******* 2026-03-11 00:55:05.349867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:55:05.349880 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.349889 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:55:05.349898 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.349932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:55:05.349938 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.349942 | orchestrator | 2026-03-11 00:55:05.349946 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-11 00:55:05.349949 | orchestrator | Wednesday 11 March 2026 00:51:19 +0000 (0:00:00.940) 0:02:27.411 ******* 2026-03-11 00:55:05.349954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:55:05.349959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:55:05.349964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:55:05.349972 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:55:05.349976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:55:05.349980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:55:05.349984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:55:05.349988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-11 00:55:05.349992 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.349996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:55:05.350000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:55:05.350053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:55:05.350063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:55:05.350067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-11 00:55:05.350071 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.350075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:55:05.350082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-03-11 00:55:05.350086 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.350090 | orchestrator | 2026-03-11 00:55:05.350093 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-11 00:55:05.350097 | orchestrator | Wednesday 11 March 2026 00:51:20 +0000 (0:00:01.054) 0:02:28.465 ******* 2026-03-11 00:55:05.350101 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.350105 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.350109 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.350113 | orchestrator | 2026-03-11 00:55:05.350116 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-11 00:55:05.350120 | orchestrator | Wednesday 11 March 2026 00:51:21 +0000 (0:00:01.312) 0:02:29.777 ******* 2026-03-11 00:55:05.350124 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.350128 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.350132 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.350136 | orchestrator | 2026-03-11 00:55:05.350139 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-11 00:55:05.350143 | orchestrator | Wednesday 11 March 2026 00:51:23 +0000 (0:00:02.064) 0:02:31.842 ******* 2026-03-11 00:55:05.350147 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.350151 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.350155 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.350158 | orchestrator | 2026-03-11 00:55:05.350162 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-11 00:55:05.350166 | orchestrator | Wednesday 11 March 2026 00:51:24 +0000 (0:00:00.327) 0:02:32.169 ******* 2026-03-11 00:55:05.350170 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.350174 | orchestrator | skipping: [testbed-node-1] 
2026-03-11 00:55:05.350177 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.350181 | orchestrator | 2026-03-11 00:55:05.350185 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-11 00:55:05.350189 | orchestrator | Wednesday 11 March 2026 00:51:24 +0000 (0:00:00.759) 0:02:32.929 ******* 2026-03-11 00:55:05.350193 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.350196 | orchestrator | 2026-03-11 00:55:05.350200 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-11 00:55:05.350204 | orchestrator | Wednesday 11 March 2026 00:51:25 +0000 (0:00:00.958) 0:02:33.887 ******* 2026-03-11 00:55:05.350208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 00:55:05.350257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:55:05.350269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:55:05.350274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 00:55:05.350278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 00:55:05.350344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:55:05.350532 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:55:05.350548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:55:05.350553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:55:05.350557 | orchestrator | 2026-03-11 
00:55:05.350561 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-11 00:55:05.350565 | orchestrator | Wednesday 11 March 2026 00:51:29 +0000 (0:00:03.682) 0:02:37.570 ******* 2026-03-11 00:55:05.350569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 00:55:05.350574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:55:05.350586 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:55:05.350593 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.350628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 00:55:05.350634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:55:05.350638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:55:05.350642 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.350647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 00:55:05.350651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:55:05.350687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:55:05.350692 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.350696 | orchestrator | 2026-03-11 00:55:05.350700 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-11 00:55:05.350729 | orchestrator | Wednesday 11 March 2026 00:51:30 +0000 (0:00:00.617) 0:02:38.187 ******* 2026-03-11 00:55:05.350738 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-11 00:55:05.350745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-11 00:55:05.350829 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.350835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-11 00:55:05.350839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-11 00:55:05.350843 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.350846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-11 00:55:05.350850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-11 00:55:05.350854 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.350858 | 
orchestrator | 2026-03-11 00:55:05.350862 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-11 00:55:05.350865 | orchestrator | Wednesday 11 March 2026 00:51:31 +0000 (0:00:00.840) 0:02:39.027 ******* 2026-03-11 00:55:05.350869 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.350873 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.350877 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.350880 | orchestrator | 2026-03-11 00:55:05.350884 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-11 00:55:05.350888 | orchestrator | Wednesday 11 March 2026 00:51:32 +0000 (0:00:01.202) 0:02:40.230 ******* 2026-03-11 00:55:05.350892 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.350896 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.350903 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.350907 | orchestrator | 2026-03-11 00:55:05.350911 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-11 00:55:05.350915 | orchestrator | Wednesday 11 March 2026 00:51:34 +0000 (0:00:02.064) 0:02:42.294 ******* 2026-03-11 00:55:05.350919 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.350922 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.350926 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.350930 | orchestrator | 2026-03-11 00:55:05.350933 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-11 00:55:05.350937 | orchestrator | Wednesday 11 March 2026 00:51:34 +0000 (0:00:00.531) 0:02:42.825 ******* 2026-03-11 00:55:05.350941 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.350945 | orchestrator | 2026-03-11 00:55:05.350948 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy 
config] ********************* 2026-03-11 00:55:05.350952 | orchestrator | Wednesday 11 March 2026 00:51:35 +0000 (0:00:01.060) 0:02:43.886 ******* 2026-03-11 00:55:05.351225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 00:55:05.351238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.351243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 00:55:05.351247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.351300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 00:55:05.351359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.351365 | orchestrator | 2026-03-11 00:55:05.351369 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-11 00:55:05.351373 | orchestrator | Wednesday 11 March 2026 00:51:39 +0000 (0:00:03.376) 0:02:47.262 ******* 2026-03-11 00:55:05.351378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 00:55:05.351385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.351399 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.351408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 00:55:05.351415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.351421 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.351470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 00:55:05.351480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.351486 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.351493 | orchestrator | 2026-03-11 00:55:05.351499 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-11 00:55:05.351505 | orchestrator | Wednesday 11 March 2026 00:51:40 +0000 (0:00:00.951) 0:02:48.214 ******* 2026-03-11 00:55:05.351512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-11 00:55:05.351647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-11 00:55:05.351654 | orchestrator | skipping: 
[testbed-node-0] 2026-03-11 00:55:05.351659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-11 00:55:05.351663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-11 00:55:05.351670 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.351678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-11 00:55:05.351687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-11 00:55:05.351693 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.351699 | orchestrator | 2026-03-11 00:55:05.351729 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-11 00:55:05.351738 | orchestrator | Wednesday 11 March 2026 00:51:41 +0000 (0:00:00.876) 0:02:49.090 ******* 2026-03-11 00:55:05.351744 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.351751 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.351757 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.351778 | orchestrator | 2026-03-11 00:55:05.351811 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-11 00:55:05.351816 | orchestrator | Wednesday 11 March 2026 00:51:42 +0000 (0:00:01.418) 0:02:50.509 ******* 2026-03-11 00:55:05.351820 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.351824 | orchestrator | changed: 
[testbed-node-1] 2026-03-11 00:55:05.351828 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.351832 | orchestrator | 2026-03-11 00:55:05.351836 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-11 00:55:05.351840 | orchestrator | Wednesday 11 March 2026 00:51:44 +0000 (0:00:02.314) 0:02:52.823 ******* 2026-03-11 00:55:05.351843 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.351847 | orchestrator | 2026-03-11 00:55:05.351851 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-11 00:55:05.351855 | orchestrator | Wednesday 11 March 2026 00:51:46 +0000 (0:00:01.367) 0:02:54.191 ******* 2026-03-11 00:55:05.351996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-11 00:55:05.352006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-11 00:55:05.352070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352091 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-11 00:55:05.352109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352170 | orchestrator | 2026-03-11 00:55:05.352176 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-11 00:55:05.352183 | orchestrator | Wednesday 11 March 2026 00:51:49 +0000 (0:00:03.580) 0:02:57.771 ******* 2026-03-11 00:55:05.352192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-11 00:55:05.352204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352225 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.352232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-11 00:55:05.352275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352306 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.352312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-11 00:55:05.352319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.352392 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.352399 | orchestrator | 2026-03-11 00:55:05.352412 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-11 00:55:05.352418 | orchestrator | Wednesday 11 March 2026 00:51:50 +0000 (0:00:00.790) 0:02:58.562 ******* 2026-03-11 00:55:05.352425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-11 00:55:05.352431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-11 00:55:05.352438 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.352445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-11 00:55:05.352451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-11 00:55:05.352458 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.352465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-11 00:55:05.352471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-11 00:55:05.352477 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.352484 | orchestrator | 2026-03-11 00:55:05.352490 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-11 00:55:05.352497 | orchestrator | Wednesday 11 March 2026 00:51:52 +0000 (0:00:01.490) 0:03:00.053 ******* 2026-03-11 00:55:05.352503 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.352509 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.352515 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.352522 | orchestrator | 2026-03-11 00:55:05.352528 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-11 00:55:05.352534 | orchestrator | Wednesday 11 March 2026 00:51:53 +0000 (0:00:01.286) 0:03:01.340 ******* 2026-03-11 00:55:05.352540 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.352555 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.352562 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.352568 | orchestrator | 2026-03-11 00:55:05.352575 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-11 00:55:05.352582 | orchestrator | Wednesday 11 March 2026 00:51:55 +0000 (0:00:02.217) 0:03:03.558 ******* 2026-03-11 00:55:05.352589 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.352595 | orchestrator | 2026-03-11 00:55:05.352602 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-11 00:55:05.352608 | orchestrator | Wednesday 11 March 2026 00:51:56 +0000 (0:00:01.326) 0:03:04.884 ******* 2026-03-11 00:55:05.352616 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-11 00:55:05.352622 | orchestrator | 2026-03-11 00:55:05.352629 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-11 00:55:05.352635 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:03.111) 0:03:07.996 ******* 2026-03-11 00:55:05.352686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:55:05.352700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:55:05.352707 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.352714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:55:05.352725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:55:05.352731 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.352784 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:55:05.352795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:55:05.352800 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.352803 | orchestrator | 2026-03-11 00:55:05.352807 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-11 00:55:05.352811 | orchestrator | Wednesday 11 March 2026 00:52:02 +0000 (0:00:02.275) 0:03:10.271 ******* 2026-03-11 00:55:05.352835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:55:05.352845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:55:05.352850 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.352854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:55:05.352858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-11 00:55:05.352864 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.352894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:55:05.352900 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:55:05.352904 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.352908 | orchestrator | 2026-03-11 00:55:05.352912 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-11 00:55:05.352916 | orchestrator | Wednesday 11 March 2026 00:52:04 +0000 (0:00:02.303) 0:03:12.575 ******* 2026-03-11 00:55:05.352920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:55:05.352928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:55:05.352932 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.352936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:55:05.352965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:55:05.352971 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.352977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-11 00:55:05.352982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-11 00:55:05.352986 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.352989 | orchestrator |
2026-03-11 00:55:05.352993 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-11 00:55:05.353005 | orchestrator | Wednesday 11 March 2026 00:52:07 +0000 (0:00:02.880) 0:03:15.456 *******
2026-03-11 00:55:05.353009 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.353013 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.353017 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.353021 | orchestrator |
2026-03-11 00:55:05.353025 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-11 00:55:05.353029 | orchestrator | Wednesday 11 March 2026 00:52:09 +0000 (0:00:01.873) 0:03:17.330 *******
2026-03-11 00:55:05.353032 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.353039 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.353043 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.353047 | orchestrator |
2026-03-11 00:55:05.353050 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-11 00:55:05.353054 | orchestrator | Wednesday 11 March 2026 00:52:10 +0000 (0:00:01.435) 0:03:18.766 *******
2026-03-11 00:55:05.353058 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.353062 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.353066 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.353069 | orchestrator |
2026-03-11 00:55:05.353073 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-11 00:55:05.353077 | orchestrator | Wednesday 11 March 2026 00:52:11 +0000 (0:00:00.312) 0:03:19.078 *******
2026-03-11 00:55:05.353081 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.353084 | orchestrator |
2026-03-11 00:55:05.353088 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-11 00:55:05.353092 | orchestrator | Wednesday 11 March 2026 00:52:12 +0000 (0:00:01.318) 0:03:20.396 *******
2026-03-11 00:55:05.353096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:55:05.353131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:55:05.353139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:55:05.353143 | orchestrator |
2026-03-11 00:55:05.353147 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-11 00:55:05.353151 | orchestrator | Wednesday 11 March 2026 00:52:14 +0000 (0:00:01.596) 0:03:21.993 *******
2026-03-11 00:55:05.353155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:55:05.353162 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.353166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:55:05.353170 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.353174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:55:05.353178 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.353182 | orchestrator |
2026-03-11 00:55:05.353186 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-11 00:55:05.353189 | orchestrator | Wednesday 11 March 2026 00:52:14 +0000 (0:00:00.469) 0:03:22.462 *******
2026-03-11 00:55:05.353194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-11 00:55:05.353198 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.353226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-11 00:55:05.353231 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.353237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-11 00:55:05.353247 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.353251 | orchestrator |
2026-03-11 00:55:05.353255 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-11 00:55:05.353259 | orchestrator | Wednesday 11 March 2026 00:52:15 +0000 (0:00:00.847) 0:03:23.310 *******
2026-03-11 00:55:05.353263 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.353266 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.353275 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.353278 | orchestrator |
2026-03-11 00:55:05.353282 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-11 00:55:05.353286 | orchestrator | Wednesday 11 March 2026 00:52:15 +0000 (0:00:00.443) 0:03:23.753 *******
2026-03-11 00:55:05.353290 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.353294 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.353297 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.353301 | orchestrator |
2026-03-11 00:55:05.353305 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-11 00:55:05.353309 | orchestrator | Wednesday 11 March 2026 00:52:17 +0000 (0:00:01.307) 0:03:25.061 *******
2026-03-11 00:55:05.353313 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.353316 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.353320 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.353324 | orchestrator |
2026-03-11 00:55:05.353328 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-11 00:55:05.353332 | orchestrator | Wednesday 11 March 2026 00:52:17 +0000 (0:00:00.323) 0:03:25.385 *******
2026-03-11 00:55:05.353335 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.353339 | orchestrator |
2026-03-11 00:55:05.353343 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-11 00:55:05.353347 | orchestrator | Wednesday 11 March 2026 00:52:18 +0000 (0:00:01.517) 0:03:26.902 *******
2026-03-11 00:55:05.353351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 00:55:05.353355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 00:55:05.353400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-11 00:55:05.353436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 00:55:05.353455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-11 00:55:05.353514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-11 00:55:05.353522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:55:05.353583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-11 00:55:05.353681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:55:05.353685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:55:05.353695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:55:05.353699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.353736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.353752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image':
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:55:05.353756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.353826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:55:05.353831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.353835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:55:05.353839 | orchestrator | 2026-03-11 00:55:05.353843 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-11 00:55:05.353846 | orchestrator | Wednesday 11 March 2026 00:52:23 +0000 (0:00:04.897) 0:03:31.800 ******* 2026-03-11 00:55:05.353850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 00:55:05.353884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-11 00:55:05.353909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:55:05.353947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:55:05.353955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:55:05.353963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.353967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-11 00:55:05.353974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:55:05.353978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 00:55:05.354034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-11 00:55:05.354039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:55:05.354050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354054 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.354092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-11 00:55:05.354103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:55:05.354126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:55:05.354130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:55:05.354168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 00:55:05.354172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-11 00:55:05.354187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.354228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-11 00:55:05.354245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-11 00:55:05.354275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:55:05.354284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354288 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.354292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.354296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.354303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:55:05.354311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-11 00:55:05.354333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:55:05.354337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-11 00:55:05.354348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:55:05.354359 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.354363 | orchestrator |
2026-03-11 00:55:05.354367 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-11 00:55:05.354371 | orchestrator | Wednesday 11 March 2026 00:52:25 +0000 (0:00:02.023) 0:03:33.824 *******
2026-03-11 00:55:05.354375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-11 00:55:05.354379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-11 00:55:05.354383 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.354397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-11 00:55:05.354402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-11 00:55:05.354406 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.354412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-11 00:55:05.354416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-11 00:55:05.354419 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.354423 | orchestrator |
2026-03-11 00:55:05.354427 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-11 00:55:05.354431 | orchestrator | Wednesday 11 March 2026 00:52:28 +0000 (0:00:02.281) 0:03:36.105 *******
2026-03-11 00:55:05.354435 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.354441 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.354445 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.354448 | orchestrator |
2026-03-11 00:55:05.354452 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-11 00:55:05.354456 | orchestrator | Wednesday 11 March 2026 00:52:29 +0000 (0:00:01.281) 0:03:37.387 *******
2026-03-11 00:55:05.354460 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.354464 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.354467 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.354471 | orchestrator |
2026-03-11 00:55:05.354475 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-11 00:55:05.354484 | orchestrator | Wednesday 11 March 2026 00:52:31 +0000 (0:00:02.160) 0:03:39.548 *******
2026-03-11 00:55:05.354488 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.354492 | orchestrator |
2026-03-11 00:55:05.354495 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-11 00:55:05.354499 | orchestrator | Wednesday 11 March 2026 00:52:32 +0000 (0:00:01.271) 0:03:40.819 *******
2026-03-11 00:55:05.354503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354527 | orchestrator |
2026-03-11 00:55:05.354534 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-11 00:55:05.354540 | orchestrator | Wednesday 11 March 2026 00:52:36 +0000 (0:00:03.970) 0:03:44.790 *******
2026-03-11 00:55:05.354544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354548 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.354552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354556 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.354560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354564 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.354568 | orchestrator |
2026-03-11 00:55:05.354572 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-11 00:55:05.354576 | orchestrator | Wednesday 11 March 2026 00:52:37 +0000 (0:00:00.516) 0:03:45.307 *******
2026-03-11 00:55:05.354579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-11 00:55:05.354584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-11 00:55:05.354588 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.354602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-11 00:55:05.354609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-11 00:55:05.354616 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.354620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-11 00:55:05.354624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-11 00:55:05.354627 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:55:05.354631 | orchestrator |
2026-03-11 00:55:05.354635 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-11 00:55:05.354639 | orchestrator | Wednesday 11 March 2026 00:52:38 +0000 (0:00:00.728) 0:03:46.035 *******
2026-03-11 00:55:05.354643 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.354647 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.354650 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.354654 | orchestrator |
2026-03-11 00:55:05.354658 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-11 00:55:05.354662 | orchestrator | Wednesday 11 March 2026 00:52:39 +0000 (0:00:01.783) 0:03:47.819 *******
2026-03-11 00:55:05.354666 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:55:05.354670 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:55:05.354673 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:55:05.354677 | orchestrator |
2026-03-11 00:55:05.354681 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-11 00:55:05.354685 | orchestrator | Wednesday 11 March 2026 00:52:41 +0000 (0:00:01.770) 0:03:49.590 *******
2026-03-11 00:55:05.354689 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:55:05.354693 | orchestrator |
2026-03-11 00:55:05.354696 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-11 00:55:05.354700 | orchestrator | Wednesday 11 March 2026 00:52:43 +0000 (0:00:01.590) 0:03:51.180 *******
2026-03-11 00:55:05.354705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354812 | orchestrator |
2026-03-11 00:55:05.354816 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-11 00:55:05.354820 | orchestrator | Wednesday 11 March 2026 00:52:47 +0000 (0:00:04.721) 0:03:55.901 *******
2026-03-11 00:55:05.354824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354841 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:55:05.354863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 00:55:05.354871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:55:05.354884 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:55:05.354892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image':
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.354907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.354944 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.354951 | orchestrator | 2026-03-11 00:55:05.354957 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-11 00:55:05.354964 | orchestrator | Wednesday 11 March 2026 00:52:49 +0000 (0:00:01.195) 0:03:57.097 ******* 2026-03-11 00:55:05.354971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:55:05.354978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:55:05.354986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:55:05.354993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355000 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355029 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:55:05.355052 | 
orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355056 | orchestrator | 2026-03-11 00:55:05.355061 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-11 00:55:05.355065 | orchestrator | Wednesday 11 March 2026 00:52:50 +0000 (0:00:00.996) 0:03:58.093 ******* 2026-03-11 00:55:05.355070 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.355074 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.355078 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.355083 | orchestrator | 2026-03-11 00:55:05.355087 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-11 00:55:05.355092 | orchestrator | Wednesday 11 March 2026 00:52:51 +0000 (0:00:01.481) 0:03:59.575 ******* 2026-03-11 00:55:05.355096 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.355101 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.355105 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.355109 | orchestrator | 2026-03-11 00:55:05.355130 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-11 00:55:05.355135 | orchestrator | Wednesday 11 March 2026 00:52:53 +0000 (0:00:02.165) 0:04:01.740 ******* 2026-03-11 00:55:05.355140 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.355144 | orchestrator | 2026-03-11 00:55:05.355149 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-11 00:55:05.355153 | orchestrator | Wednesday 11 March 2026 00:52:55 +0000 (0:00:01.531) 0:04:03.272 ******* 2026-03-11 00:55:05.355160 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-11 00:55:05.355166 | orchestrator | 2026-03-11 00:55:05.355170 | orchestrator | 
TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-11 00:55:05.355175 | orchestrator | Wednesday 11 March 2026 00:52:56 +0000 (0:00:00.867) 0:04:04.139 ******* 2026-03-11 00:55:05.355179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-11 00:55:05.355185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-11 00:55:05.355192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-11 00:55:05.355197 | orchestrator | 2026-03-11 00:55:05.355202 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single 
external frontend] *** 2026-03-11 00:55:05.355207 | orchestrator | Wednesday 11 March 2026 00:53:00 +0000 (0:00:04.632) 0:04:08.771 ******* 2026-03-11 00:55:05.355211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355216 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355225 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355234 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
00:55:05.355239 | orchestrator | 2026-03-11 00:55:05.355254 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-11 00:55:05.355259 | orchestrator | Wednesday 11 March 2026 00:53:01 +0000 (0:00:01.022) 0:04:09.794 ******* 2026-03-11 00:55:05.355264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:55:05.355271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:55:05.355276 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:55:05.355285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:55:05.355292 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:55:05.355301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:55:05.355305 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355309 | orchestrator | 2026-03-11 00:55:05.355312 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-11 00:55:05.355316 | orchestrator | Wednesday 11 March 2026 00:53:03 +0000 (0:00:01.632) 0:04:11.426 ******* 2026-03-11 00:55:05.355320 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.355324 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.355328 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.355332 | orchestrator | 2026-03-11 00:55:05.355335 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-11 00:55:05.355339 | orchestrator | Wednesday 11 March 2026 00:53:06 +0000 (0:00:02.839) 0:04:14.266 ******* 2026-03-11 00:55:05.355343 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.355347 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.355351 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.355354 | orchestrator | 2026-03-11 00:55:05.355358 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-11 00:55:05.355362 | orchestrator | Wednesday 11 March 2026 00:53:09 +0000 (0:00:03.225) 0:04:17.491 ******* 2026-03-11 00:55:05.355366 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-11 00:55:05.355370 | orchestrator | 2026-03-11 00:55:05.355374 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-11 00:55:05.355378 | orchestrator | Wednesday 11 March 2026 00:53:10 +0000 (0:00:01.377) 0:04:18.868 ******* 2026-03-11 00:55:05.355382 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355386 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355394 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355416 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355420 | orchestrator | 2026-03-11 00:55:05.355426 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-11 00:55:05.355430 | orchestrator | Wednesday 11 March 2026 00:53:12 +0000 (0:00:01.271) 0:04:20.140 ******* 2026-03-11 00:55:05.355434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355438 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355446 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2026-03-11 00:55:05.355454 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355458 | orchestrator | 2026-03-11 00:55:05.355461 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-11 00:55:05.355465 | orchestrator | Wednesday 11 March 2026 00:53:13 +0000 (0:00:01.375) 0:04:21.515 ******* 2026-03-11 00:55:05.355469 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355473 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355477 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355481 | orchestrator | 2026-03-11 00:55:05.355484 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-11 00:55:05.355488 | orchestrator | Wednesday 11 March 2026 00:53:15 +0000 (0:00:01.959) 0:04:23.475 ******* 2026-03-11 00:55:05.355492 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.355496 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.355500 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.355504 | orchestrator | 2026-03-11 00:55:05.355508 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-11 00:55:05.355512 | orchestrator | Wednesday 11 March 2026 00:53:18 +0000 (0:00:02.490) 0:04:25.966 ******* 2026-03-11 00:55:05.355516 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.355520 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.355524 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.355527 | orchestrator | 2026-03-11 00:55:05.355531 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-11 00:55:05.355535 | orchestrator | Wednesday 11 March 2026 00:53:21 +0000 (0:00:03.061) 0:04:29.027 ******* 2026-03-11 00:55:05.355542 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-serialproxy) 2026-03-11 00:55:05.355546 | orchestrator | 2026-03-11 00:55:05.355550 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-11 00:55:05.355554 | orchestrator | Wednesday 11 March 2026 00:53:21 +0000 (0:00:00.880) 0:04:29.908 ******* 2026-03-11 00:55:05.355570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:55:05.355574 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:55:05.355584 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:55:05.355592 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355596 | orchestrator | 2026-03-11 00:55:05.355600 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-11 00:55:05.355604 | orchestrator | Wednesday 11 March 2026 00:53:23 +0000 (0:00:01.319) 0:04:31.228 ******* 2026-03-11 00:55:05.355608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:55:05.355612 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:55:05.355620 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': 
False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:55:05.355631 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355635 | orchestrator | 2026-03-11 00:55:05.355639 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-11 00:55:05.355643 | orchestrator | Wednesday 11 March 2026 00:53:24 +0000 (0:00:01.390) 0:04:32.619 ******* 2026-03-11 00:55:05.355646 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.355650 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355654 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.355658 | orchestrator | 2026-03-11 00:55:05.355662 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-11 00:55:05.355665 | orchestrator | Wednesday 11 March 2026 00:53:26 +0000 (0:00:01.616) 0:04:34.235 ******* 2026-03-11 00:55:05.355669 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.355673 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.355677 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.355681 | orchestrator | 2026-03-11 00:55:05.355685 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-11 00:55:05.355689 | orchestrator | Wednesday 11 March 2026 00:53:28 +0000 (0:00:02.418) 0:04:36.653 ******* 2026-03-11 00:55:05.355693 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.355696 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.355700 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.355704 | orchestrator | 2026-03-11 
00:55:05.355708 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-11 00:55:05.355712 | orchestrator | Wednesday 11 March 2026 00:53:32 +0000 (0:00:03.439) 0:04:40.093 ******* 2026-03-11 00:55:05.355726 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.355730 | orchestrator | 2026-03-11 00:55:05.355734 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-11 00:55:05.355738 | orchestrator | Wednesday 11 March 2026 00:53:33 +0000 (0:00:01.661) 0:04:41.755 ******* 2026-03-11 00:55:05.355744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.355749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:55:05.355753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.355801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.355811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:55:05.355817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.355842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.355866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:55:05.355874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.355889 | orchestrator | 2026-03-11 00:55:05.355892 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-11 00:55:05.355896 | orchestrator | Wednesday 11 March 2026 00:53:36 +0000 (0:00:03.023) 0:04:44.778 ******* 2026-03-11 00:55:05.355901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.355905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:55:05.355920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.355952 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.355958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.355965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:55:05.355972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.355996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.356006 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.356013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.356025 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.356031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:55:05.356038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.356045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:55:05.356069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:55:05.356077 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.356083 | orchestrator | 2026-03-11 00:55:05.356090 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-11 00:55:05.356097 | orchestrator | Wednesday 11 March 2026 00:53:37 +0000 (0:00:00.642) 0:04:45.420 ******* 2026-03-11 00:55:05.356103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:55:05.356114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:55:05.356118 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.356122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:55:05.356129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:55:05.356133 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.356137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:55:05.356141 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:55:05.356146 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.356152 | orchestrator | 2026-03-11 00:55:05.356161 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-11 00:55:05.356169 | orchestrator | Wednesday 11 March 2026 00:53:39 +0000 (0:00:01.540) 0:04:46.961 ******* 2026-03-11 00:55:05.356175 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.356181 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.356187 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.356193 | orchestrator | 2026-03-11 00:55:05.356198 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-11 00:55:05.356203 | orchestrator | Wednesday 11 March 2026 00:53:40 +0000 (0:00:01.479) 0:04:48.440 ******* 2026-03-11 00:55:05.356209 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.356214 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.356220 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.356227 | orchestrator | 2026-03-11 00:55:05.356233 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-11 00:55:05.356240 | orchestrator | Wednesday 11 March 2026 00:53:42 +0000 (0:00:02.137) 0:04:50.577 ******* 2026-03-11 00:55:05.356246 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.356253 | orchestrator | 2026-03-11 00:55:05.356260 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-11 00:55:05.356266 | orchestrator | Wednesday 11 March 2026 00:53:43 +0000 (0:00:01.347) 0:04:51.924 ******* 2026-03-11 
00:55:05.356274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:55:05.356298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:55:05.356310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:55:05.356315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:55:05.356320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:55:05.356335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-11 00:55:05.356343 | orchestrator | 2026-03-11 00:55:05.356347 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-11 00:55:05.356351 | orchestrator | Wednesday 11 March 2026 00:53:49 +0000 (0:00:05.769) 0:04:57.694 ******* 2026-03-11 00:55:05.356357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:55:05.356361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:55:05.356366 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.356370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:55:05.356384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:55:05.356392 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.356400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:55:05.356404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:55:05.356409 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.356412 | orchestrator | 2026-03-11 00:55:05.356416 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-11 00:55:05.356420 | orchestrator | Wednesday 11 March 2026 00:53:50 +0000 (0:00:00.610) 0:04:58.305 ******* 2026-03-11 00:55:05.356424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-11 00:55:05.356428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-11 00:55:05.356432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:55:05.356439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:55:05.356443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:55:05.356449 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.356453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:55:05.356457 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.356461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-11 00:55:05.356475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:55:05.356480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:55:05.356486 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.356490 | orchestrator | 2026-03-11 00:55:05.356494 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-11 00:55:05.356498 | orchestrator | Wednesday 11 March 2026 00:53:51 +0000 (0:00:00.872) 0:04:59.177 ******* 2026-03-11 00:55:05.356502 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.356505 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.356509 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.356513 | orchestrator | 2026-03-11 00:55:05.356517 | 
orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-11 00:55:05.356521 | orchestrator | Wednesday 11 March 2026 00:53:51 +0000 (0:00:00.648) 0:04:59.825 ******* 2026-03-11 00:55:05.356524 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.356528 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.356532 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.356536 | orchestrator | 2026-03-11 00:55:05.356540 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-11 00:55:05.356543 | orchestrator | Wednesday 11 March 2026 00:53:52 +0000 (0:00:01.120) 0:05:00.946 ******* 2026-03-11 00:55:05.356547 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.356551 | orchestrator | 2026-03-11 00:55:05.356555 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-11 00:55:05.356559 | orchestrator | Wednesday 11 March 2026 00:53:54 +0000 (0:00:01.296) 0:05:02.243 ******* 2026-03-11 00:55:05.356563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 00:55:05.356567 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 00:55:05.356577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:55:05.356584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:55:05.356611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 00:55:05.356657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:55:05.356673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 00:55:05.356703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:55:05.356709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-03-11 00:55:05.356723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 00:55:05.356730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:55:05.356734 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 00:55:05.356754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:55:05.356802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356817 | orchestrator | 2026-03-11 00:55:05.356824 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-11 00:55:05.356828 | orchestrator | Wednesday 11 March 2026 00:53:58 +0000 (0:00:03.963) 0:05:06.207 ******* 2026-03-11 00:55:05.356834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 00:55:05.356838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:55:05.356842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 00:55:05.356854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:55:05.356864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 
00:55:05.356870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 00:55:05.356885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:55:05.356899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 00:55:05.356935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356942 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.356948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:55:05.356955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 00:55:05.356966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:55:05.356979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.356990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.356994 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.356998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.357002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.357008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 00:55:05.357015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:55:05.357021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.357025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:55:05.357029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:55:05.357033 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357037 | orchestrator | 2026-03-11 00:55:05.357043 | orchestrator | TASK [haproxy-config : Configuring 
firewall for prometheus] ******************** 2026-03-11 00:55:05.357047 | orchestrator | Wednesday 11 March 2026 00:53:59 +0000 (0:00:01.377) 0:05:07.585 ******* 2026-03-11 00:55:05.357051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-11 00:55:05.357055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-11 00:55:05.357059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-11 00:55:05.357063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-11 00:55:05.357067 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-11 00:55:05.357079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-11 00:55:05.357086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-11 00:55:05.357090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-11 00:55:05.357094 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-11 00:55:05.357102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-11 00:55:05.357106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-11 00:55:05.357110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-11 00:55:05.357113 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357117 | orchestrator | 2026-03-11 00:55:05.357121 | orchestrator | TASK 
[proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-11 00:55:05.357125 | orchestrator | Wednesday 11 March 2026 00:54:00 +0000 (0:00:00.962) 0:05:08.547 ******* 2026-03-11 00:55:05.357129 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357132 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357136 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357140 | orchestrator | 2026-03-11 00:55:05.357145 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-11 00:55:05.357151 | orchestrator | Wednesday 11 March 2026 00:54:01 +0000 (0:00:00.442) 0:05:08.990 ******* 2026-03-11 00:55:05.357158 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357164 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357171 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357177 | orchestrator | 2026-03-11 00:55:05.357182 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-11 00:55:05.357186 | orchestrator | Wednesday 11 March 2026 00:54:02 +0000 (0:00:01.423) 0:05:10.413 ******* 2026-03-11 00:55:05.357190 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.357194 | orchestrator | 2026-03-11 00:55:05.357198 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-11 00:55:05.357201 | orchestrator | Wednesday 11 March 2026 00:54:04 +0000 (0:00:01.668) 0:05:12.081 ******* 2026-03-11 00:55:05.357208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:55:05.357218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:55:05.357222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-11 00:55:05.357226 | orchestrator | 2026-03-11 00:55:05.357230 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-11 00:55:05.357234 | orchestrator | Wednesday 11 March 2026 00:54:06 +0000 (0:00:02.520) 0:05:14.602 ******* 2026-03-11 00:55:05.357238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-03-11 00:55:05.357242 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-11 00:55:05.357254 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-11 00:55:05.357265 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357268 | orchestrator | 2026-03-11 00:55:05.357272 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-11 00:55:05.357276 | orchestrator | Wednesday 11 March 2026 00:54:07 +0000 (0:00:00.381) 0:05:14.983 ******* 2026-03-11 00:55:05.357280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-11 00:55:05.357284 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-11 00:55:05.357291 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-11 00:55:05.357299 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357303 | orchestrator | 2026-03-11 00:55:05.357306 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-11 00:55:05.357310 | orchestrator | Wednesday 11 March 2026 00:54:08 +0000 (0:00:00.969) 0:05:15.953 ******* 2026-03-11 00:55:05.357314 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357318 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357321 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357325 | orchestrator | 2026-03-11 00:55:05.357329 | orchestrator | TASK [proxysql-config : 
Copying over rabbitmq ProxySQL rules config] *********** 2026-03-11 00:55:05.357333 | orchestrator | Wednesday 11 March 2026 00:54:08 +0000 (0:00:00.430) 0:05:16.383 ******* 2026-03-11 00:55:05.357336 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357340 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357344 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357348 | orchestrator | 2026-03-11 00:55:05.357351 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-11 00:55:05.357355 | orchestrator | Wednesday 11 March 2026 00:54:09 +0000 (0:00:01.338) 0:05:17.722 ******* 2026-03-11 00:55:05.357361 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:55:05.357372 | orchestrator | 2026-03-11 00:55:05.357379 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-11 00:55:05.357384 | orchestrator | Wednesday 11 March 2026 00:54:11 +0000 (0:00:01.733) 0:05:19.456 ******* 2026-03-11 00:55:05.357391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.357405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.357412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.357419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.357432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.357442 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-11 00:55:05.357449 | orchestrator | 2026-03-11 00:55:05.357459 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-11 00:55:05.357466 | orchestrator | Wednesday 11 March 2026 00:54:17 +0000 (0:00:06.327) 0:05:25.783 ******* 2026-03-11 00:55:05.357473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.357479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.357490 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.357500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.357505 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.357515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-11 00:55:05.357521 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357525 | orchestrator | 2026-03-11 00:55:05.357529 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-11 00:55:05.357533 | orchestrator | Wednesday 11 March 2026 00:54:18 +0000 (0:00:00.684) 0:05:26.467 ******* 2026-03-11 00:55:05.357539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357565 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357600 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-11 00:55:05.357626 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357630 | orchestrator | 2026-03-11 00:55:05.357634 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-11 00:55:05.357638 | orchestrator | Wednesday 11 March 2026 00:54:20 +0000 (0:00:01.682) 0:05:28.150 ******* 2026-03-11 00:55:05.357642 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.357646 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.357649 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.357656 | orchestrator | 2026-03-11 00:55:05.357660 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-11 00:55:05.357664 | orchestrator | Wednesday 11 March 2026 00:54:21 +0000 (0:00:01.346) 0:05:29.496 ******* 2026-03-11 00:55:05.357668 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.357671 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.357675 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.357679 | orchestrator | 2026-03-11 00:55:05.357682 | orchestrator | 
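The healthcheck blocks emitted above (e.g. `{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}`) follow Docker's healthcheck semantics: a container is flagged unhealthy only after `retries` consecutive probe failures. A minimal sketch of that rule, with `is_unhealthy` as a hypothetical helper (not part of kolla-ansible):

```python
def is_unhealthy(probe_results, retries=3):
    """Return True once `retries` consecutive probes have failed.

    `probe_results` is a sequence of booleans (True = probe passed),
    mirroring how Docker evaluates a healthcheck's `retries` setting:
    any successful probe resets the failure streak.
    """
    streak = 0
    for ok in probe_results:
        streak = 0 if ok else streak + 1
        if streak >= retries:
            return True
    return False
```

With `retries=3`, a container that fails twice, recovers, then fails once again is still considered healthy, because the streak resets on each success.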
TASK [include_role : swift] **************************************************** 2026-03-11 00:55:05.357686 | orchestrator | Wednesday 11 March 2026 00:54:23 +0000 (0:00:02.154) 0:05:31.650 ******* 2026-03-11 00:55:05.357690 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357694 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357698 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357701 | orchestrator | 2026-03-11 00:55:05.357705 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-11 00:55:05.357709 | orchestrator | Wednesday 11 March 2026 00:54:24 +0000 (0:00:00.343) 0:05:31.994 ******* 2026-03-11 00:55:05.357713 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357716 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357720 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357724 | orchestrator | 2026-03-11 00:55:05.357728 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-11 00:55:05.357732 | orchestrator | Wednesday 11 March 2026 00:54:24 +0000 (0:00:00.325) 0:05:32.320 ******* 2026-03-11 00:55:05.357735 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357739 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357743 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357747 | orchestrator | 2026-03-11 00:55:05.357750 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-11 00:55:05.357754 | orchestrator | Wednesday 11 March 2026 00:54:25 +0000 (0:00:00.639) 0:05:32.960 ******* 2026-03-11 00:55:05.357758 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357778 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357782 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357786 | orchestrator | 2026-03-11 00:55:05.357790 | orchestrator | 
TASK [include_role : watcher] ************************************************** 2026-03-11 00:55:05.357794 | orchestrator | Wednesday 11 March 2026 00:54:25 +0000 (0:00:00.313) 0:05:33.274 ******* 2026-03-11 00:55:05.357798 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357801 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357805 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357809 | orchestrator | 2026-03-11 00:55:05.357813 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-11 00:55:05.357816 | orchestrator | Wednesday 11 March 2026 00:54:25 +0000 (0:00:00.335) 0:05:33.610 ******* 2026-03-11 00:55:05.357820 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.357824 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.357828 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.357831 | orchestrator | 2026-03-11 00:55:05.357835 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-11 00:55:05.357839 | orchestrator | Wednesday 11 March 2026 00:54:26 +0000 (0:00:00.858) 0:05:34.468 ******* 2026-03-11 00:55:05.357843 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.357847 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.357850 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.357854 | orchestrator | 2026-03-11 00:55:05.357858 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-11 00:55:05.357862 | orchestrator | Wednesday 11 March 2026 00:54:27 +0000 (0:00:00.743) 0:05:35.212 ******* 2026-03-11 00:55:05.357865 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.357869 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.357873 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.357877 | orchestrator | 2026-03-11 00:55:05.357881 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup keepalived container] ************** 2026-03-11 00:55:05.357887 | orchestrator | Wednesday 11 March 2026 00:54:27 +0000 (0:00:00.356) 0:05:35.568 ******* 2026-03-11 00:55:05.357891 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.357895 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.357898 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.357902 | orchestrator | 2026-03-11 00:55:05.357908 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-11 00:55:05.357912 | orchestrator | Wednesday 11 March 2026 00:54:28 +0000 (0:00:00.953) 0:05:36.522 ******* 2026-03-11 00:55:05.357916 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.357920 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.357924 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.357927 | orchestrator | 2026-03-11 00:55:05.357931 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-11 00:55:05.357935 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:01.241) 0:05:37.764 ******* 2026-03-11 00:55:05.357939 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.357945 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.357949 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.357953 | orchestrator | 2026-03-11 00:55:05.357957 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-11 00:55:05.357961 | orchestrator | Wednesday 11 March 2026 00:54:30 +0000 (0:00:00.984) 0:05:38.748 ******* 2026-03-11 00:55:05.357964 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.357968 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.357974 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.357980 | orchestrator | 2026-03-11 00:55:05.357987 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-11 
00:55:05.357994 | orchestrator | Wednesday 11 March 2026 00:54:38 +0000 (0:00:07.939) 0:05:46.687 ******* 2026-03-11 00:55:05.358001 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.358008 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.358034 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.358042 | orchestrator | 2026-03-11 00:55:05.358048 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-11 00:55:05.358055 | orchestrator | Wednesday 11 March 2026 00:54:39 +0000 (0:00:00.770) 0:05:47.457 ******* 2026-03-11 00:55:05.358061 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.358065 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.358069 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.358073 | orchestrator | 2026-03-11 00:55:05.358077 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-11 00:55:05.358081 | orchestrator | Wednesday 11 March 2026 00:54:47 +0000 (0:00:07.708) 0:05:55.166 ******* 2026-03-11 00:55:05.358085 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.358088 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.358092 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.358096 | orchestrator | 2026-03-11 00:55:05.358100 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-11 00:55:05.358104 | orchestrator | Wednesday 11 March 2026 00:54:50 +0000 (0:00:02.913) 0:05:58.079 ******* 2026-03-11 00:55:05.358107 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:55:05.358111 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:55:05.358115 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:55:05.358119 | orchestrator | 2026-03-11 00:55:05.358123 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-11 00:55:05.358127 | orchestrator | Wednesday 11 
March 2026 00:54:53 +0000 (0:00:03.787) 0:06:01.867 ******* 2026-03-11 00:55:05.358131 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.358134 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.358138 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.358142 | orchestrator | 2026-03-11 00:55:05.358146 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-11 00:55:05.358150 | orchestrator | Wednesday 11 March 2026 00:54:54 +0000 (0:00:00.317) 0:06:02.185 ******* 2026-03-11 00:55:05.358157 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.358161 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.358164 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.358168 | orchestrator | 2026-03-11 00:55:05.358172 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-11 00:55:05.358176 | orchestrator | Wednesday 11 March 2026 00:54:54 +0000 (0:00:00.324) 0:06:02.509 ******* 2026-03-11 00:55:05.358180 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.358184 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.358187 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.358191 | orchestrator | 2026-03-11 00:55:05.358195 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-11 00:55:05.358199 | orchestrator | Wednesday 11 March 2026 00:54:55 +0000 (0:00:00.507) 0:06:03.017 ******* 2026-03-11 00:55:05.358203 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.358207 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.358210 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.358214 | orchestrator | 2026-03-11 00:55:05.358218 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-11 00:55:05.358222 | orchestrator | Wednesday 11 
March 2026 00:54:55 +0000 (0:00:00.293) 0:06:03.311 ******* 2026-03-11 00:55:05.358226 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.358229 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.358233 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.358237 | orchestrator | 2026-03-11 00:55:05.358241 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-11 00:55:05.358245 | orchestrator | Wednesday 11 March 2026 00:54:55 +0000 (0:00:00.299) 0:06:03.611 ******* 2026-03-11 00:55:05.358249 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:55:05.358252 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:55:05.358256 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:55:05.358260 | orchestrator | 2026-03-11 00:55:05.358264 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-11 00:55:05.358268 | orchestrator | Wednesday 11 March 2026 00:54:55 +0000 (0:00:00.321) 0:06:03.932 ******* 2026-03-11 00:55:05.358271 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.358276 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.358280 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.358283 | orchestrator | 2026-03-11 00:55:05.358287 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-11 00:55:05.358291 | orchestrator | Wednesday 11 March 2026 00:55:00 +0000 (0:00:04.921) 0:06:08.853 ******* 2026-03-11 00:55:05.358295 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:55:05.358299 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:55:05.358303 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:55:05.358306 | orchestrator | 2026-03-11 00:55:05.358310 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:55:05.358318 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 
failed=0 skipped=97  rescued=0 ignored=0 2026-03-11 00:55:05.358322 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-11 00:55:05.358329 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-11 00:55:05.358333 | orchestrator | 2026-03-11 00:55:05.358337 | orchestrator | 2026-03-11 00:55:05.358341 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:55:05.358344 | orchestrator | Wednesday 11 March 2026 00:55:01 +0000 (0:00:00.791) 0:06:09.645 ******* 2026-03-11 00:55:05.358348 | orchestrator | =============================================================================== 2026-03-11 00:55:05.358352 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.94s 2026-03-11 00:55:05.358358 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.71s 2026-03-11 00:55:05.358362 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.33s 2026-03-11 00:55:05.358366 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.77s 2026-03-11 00:55:05.358370 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.96s 2026-03-11 00:55:05.358374 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.92s 2026-03-11 00:55:05.358377 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.90s 2026-03-11 00:55:05.358381 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.72s 2026-03-11 00:55:05.358385 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.63s 2026-03-11 00:55:05.358388 | orchestrator | haproxy-config : Copying over glance haproxy config 
--------------------- 4.27s 2026-03-11 00:55:05.358392 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.17s 2026-03-11 00:55:05.358396 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.16s 2026-03-11 00:55:05.358400 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.01s 2026-03-11 00:55:05.358403 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.97s 2026-03-11 00:55:05.358407 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.96s 2026-03-11 00:55:05.358411 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.89s 2026-03-11 00:55:05.358415 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.80s 2026-03-11 00:55:05.358418 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 3.79s 2026-03-11 00:55:05.358422 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.68s 2026-03-11 00:55:05.358426 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 3.59s 2026-03-11 00:55:05.358430 | orchestrator | 2026-03-11 00:55:05 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:55:05.358434 | orchestrator | 2026-03-11 00:55:05 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:55:05.358438 | orchestrator | 2026-03-11 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:08.380347 | orchestrator | 2026-03-11 00:55:08 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:55:08.381859 | orchestrator | 2026-03-11 00:55:08 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:55:08.385640 | orchestrator | 2026-03-11 00:55:08 | INFO  | 
Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:55:08.385688 | orchestrator | 2026-03-11 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:06.171268 | orchestrator | 2026-03-11 00:56:06 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:06.173025 | orchestrator | 2026-03-11 00:56:06 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:06.174173 | orchestrator | 2026-03-11 00:56:06 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 
00:56:06.174368 | orchestrator | 2026-03-11 00:56:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:09.216183 | orchestrator | 2026-03-11 00:56:09 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:09.216468 | orchestrator | 2026-03-11 00:56:09 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:09.217206 | orchestrator | 2026-03-11 00:56:09 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:09.217677 | orchestrator | 2026-03-11 00:56:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:12.262860 | orchestrator | 2026-03-11 00:56:12 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:12.264297 | orchestrator | 2026-03-11 00:56:12 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:12.266230 | orchestrator | 2026-03-11 00:56:12 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:12.266268 | orchestrator | 2026-03-11 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:15.310280 | orchestrator | 2026-03-11 00:56:15 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:15.310367 | orchestrator | 2026-03-11 00:56:15 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:15.311085 | orchestrator | 2026-03-11 00:56:15 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:15.311127 | orchestrator | 2026-03-11 00:56:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:18.347901 | orchestrator | 2026-03-11 00:56:18 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:18.349560 | orchestrator | 2026-03-11 00:56:18 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:18.351569 | orchestrator | 2026-03-11 00:56:18 | 
INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:18.351622 | orchestrator | 2026-03-11 00:56:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:21.397320 | orchestrator | 2026-03-11 00:56:21 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:21.398963 | orchestrator | 2026-03-11 00:56:21 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:21.401479 | orchestrator | 2026-03-11 00:56:21 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:21.401522 | orchestrator | 2026-03-11 00:56:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:24.448505 | orchestrator | 2026-03-11 00:56:24 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:24.453059 | orchestrator | 2026-03-11 00:56:24 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:24.454913 | orchestrator | 2026-03-11 00:56:24 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:24.455315 | orchestrator | 2026-03-11 00:56:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:27.491670 | orchestrator | 2026-03-11 00:56:27 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:27.492544 | orchestrator | 2026-03-11 00:56:27 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:27.494182 | orchestrator | 2026-03-11 00:56:27 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:27.494567 | orchestrator | 2026-03-11 00:56:27 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:30.534894 | orchestrator | 2026-03-11 00:56:30 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:30.536339 | orchestrator | 2026-03-11 00:56:30 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in 
state STARTED 2026-03-11 00:56:30.538185 | orchestrator | 2026-03-11 00:56:30 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:30.538226 | orchestrator | 2026-03-11 00:56:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:33.576826 | orchestrator | 2026-03-11 00:56:33 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:33.578422 | orchestrator | 2026-03-11 00:56:33 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:33.580178 | orchestrator | 2026-03-11 00:56:33 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:33.580212 | orchestrator | 2026-03-11 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:36.623164 | orchestrator | 2026-03-11 00:56:36 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:36.624560 | orchestrator | 2026-03-11 00:56:36 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:36.625422 | orchestrator | 2026-03-11 00:56:36 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:36.625443 | orchestrator | 2026-03-11 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:39.668424 | orchestrator | 2026-03-11 00:56:39 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:39.670298 | orchestrator | 2026-03-11 00:56:39 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:39.672086 | orchestrator | 2026-03-11 00:56:39 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:39.672129 | orchestrator | 2026-03-11 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:42.712038 | orchestrator | 2026-03-11 00:56:42 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:42.713415 | orchestrator 
| 2026-03-11 00:56:42 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:42.714997 | orchestrator | 2026-03-11 00:56:42 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:42.715037 | orchestrator | 2026-03-11 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:45.765358 | orchestrator | 2026-03-11 00:56:45 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:45.765435 | orchestrator | 2026-03-11 00:56:45 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:45.766824 | orchestrator | 2026-03-11 00:56:45 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:45.766854 | orchestrator | 2026-03-11 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:48.819869 | orchestrator | 2026-03-11 00:56:48 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:48.822745 | orchestrator | 2026-03-11 00:56:48 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:48.824513 | orchestrator | 2026-03-11 00:56:48 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:48.824919 | orchestrator | 2026-03-11 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:51.872212 | orchestrator | 2026-03-11 00:56:51 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:51.874450 | orchestrator | 2026-03-11 00:56:51 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state STARTED 2026-03-11 00:56:51.876213 | orchestrator | 2026-03-11 00:56:51 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:51.876394 | orchestrator | 2026-03-11 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:54.921423 | orchestrator | 2026-03-11 00:56:54 | INFO  | Task 
ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:54.921941 | orchestrator | 2026-03-11 00:56:54 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:56:54.930075 | orchestrator | 2026-03-11 00:56:54 | INFO  | Task 2a6a0f44-abc7-495e-82a0-413e0c165a6c is in state SUCCESS 2026-03-11 00:56:54.931222 | orchestrator | 2026-03-11 00:56:54.931263 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-11 00:56:54.931270 | orchestrator | 2.16.14 2026-03-11 00:56:54.931277 | orchestrator | 2026-03-11 00:56:54.931283 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-11 00:56:54.931289 | orchestrator | 2026-03-11 00:56:54.931294 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-11 00:56:54.931300 | orchestrator | Wednesday 11 March 2026 00:46:20 +0000 (0:00:00.709) 0:00:00.709 ******* 2026-03-11 00:56:54.931313 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.931319 | orchestrator | 2026-03-11 00:56:54.931324 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-11 00:56:54.931330 | orchestrator | Wednesday 11 March 2026 00:46:21 +0000 (0:00:00.992) 0:00:01.701 ******* 2026-03-11 00:56:54.931334 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.931338 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.931341 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.931344 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.931348 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.931351 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.931354 | orchestrator | 2026-03-11 00:56:54.931358 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
*****************************************
2026-03-11 00:56:54.931361 | orchestrator | Wednesday 11 March 2026 00:46:22 +0000 (0:00:01.358) 0:00:03.060 *******
2026-03-11 00:56:54.931376 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.931379 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.931382 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.931386 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.931391 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.931396 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.931401 | orchestrator |
2026-03-11 00:56:54.931405 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-11 00:56:54.931410 | orchestrator | Wednesday 11 March 2026 00:46:23 +0000 (0:00:01.345) 0:00:05.258 *******
2026-03-11 00:56:54.931416 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.931421 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.931427 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.931432 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.931437 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.931443 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.931448 | orchestrator |
2026-03-11 00:56:54.931452 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-11 00:56:54.931457 | orchestrator | Wednesday 11 March 2026 00:46:24 +0000 (0:00:01.345) 0:00:05.258 *******
2026-03-11 00:56:54.931462 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.931466 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.931474 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.931480 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.931485 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.931499 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.931504 | orchestrator |
2026-03-11 00:56:54.931509 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-11 00:56:54.931514 | orchestrator | Wednesday 11 March 2026 00:46:25 +0000 (0:00:00.706) 0:00:05.964 *******
2026-03-11 00:56:54.931519 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.931524 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.931529 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.931534 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.931539 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.931544 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.931549 | orchestrator |
2026-03-11 00:56:54.931555 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-11 00:56:54.931560 | orchestrator | Wednesday 11 March 2026 00:46:25 +0000 (0:00:00.660) 0:00:06.625 *******
2026-03-11 00:56:54.931565 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.931570 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.931841 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.931846 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.931849 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.931883 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.931887 | orchestrator |
2026-03-11 00:56:54.931891 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-11 00:56:54.931894 | orchestrator | Wednesday 11 March 2026 00:46:27 +0000 (0:00:01.117) 0:00:07.742 *******
2026-03-11 00:56:54.931898 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.931902 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.931905 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.931908 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.931911 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.931914 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.931917 | orchestrator |
2026-03-11 00:56:54.931920 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-11 00:56:54.931923 | orchestrator | Wednesday 11 March 2026 00:46:27 +0000 (0:00:00.668) 0:00:08.411 *******
2026-03-11 00:56:54.931927 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.931930 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.931933 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.931936 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.931939 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.931943 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.931953 | orchestrator |
2026-03-11 00:56:54.931957 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-11 00:56:54.931960 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:00.817) 0:00:09.229 *******
2026-03-11 00:56:54.931963 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:54.931966 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:54.931969 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:54.931972 | orchestrator |
2026-03-11 00:56:54.931975 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-11 00:56:54.931979 | orchestrator | Wednesday 11 March 2026 00:46:29 +0000 (0:00:00.676) 0:00:09.905 *******
2026-03-11 00:56:54.931982 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.931985 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.931988 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.931999 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.932002 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.932005 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.932008 | orchestrator |
2026-03-11 00:56:54.932012 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-11 00:56:54.932015 | orchestrator | Wednesday 11 March 2026 00:46:29 +0000 (0:00:00.742) 0:00:10.648 *******
2026-03-11 00:56:54.932018 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:54.932021 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:54.932024 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:54.932027 | orchestrator |
2026-03-11 00:56:54.932031 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-11 00:56:54.932034 | orchestrator | Wednesday 11 March 2026 00:46:32 +0000 (0:00:02.262) 0:00:12.911 *******
2026-03-11 00:56:54.932037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:56:54.932040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:56:54.932043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:56:54.932047 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932050 | orchestrator |
2026-03-11 00:56:54.932053 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-11 00:56:54.932056 | orchestrator | Wednesday 11 March 2026 00:46:33 +0000 (0:00:00.814) 0:00:13.725 *******
2026-03-11 00:56:54.932061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932076 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932080 | orchestrator |
2026-03-11 00:56:54.932083 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-11 00:56:54.932090 | orchestrator | Wednesday 11 March 2026 00:46:34 +0000 (0:00:01.110) 0:00:14.836 *******
2026-03-11 00:56:54.932095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932110 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932113 | orchestrator |
2026-03-11 00:56:54.932117 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-11 00:56:54.932120 | orchestrator | Wednesday 11 March 2026 00:46:34 +0000 (0:00:00.433) 0:00:15.269 *******
2026-03-11 00:56:54.932127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-11 00:46:30.790740', 'end': '2026-03-11 00:46:30.856106', 'delta': '0:00:00.065366', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-11 00:46:31.492777', 'end': '2026-03-11 00:46:31.578576', 'delta': '0:00:00.085799', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-11 00:46:32.035078', 'end': '2026-03-11 00:46:32.123749', 'delta': '0:00:00.088671', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.932139 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932142 | orchestrator |
2026-03-11 00:56:54.932145 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-11 00:56:54.932148 | orchestrator | Wednesday 11 March 2026 00:46:34 +0000 (0:00:00.287) 0:00:15.557 *******
2026-03-11 00:56:54.932151 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.932157 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.932161 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.932164 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.932169 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.932172 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.932175 | orchestrator |
2026-03-11 00:56:54.932178 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-11 00:56:54.932181 | orchestrator | Wednesday 11 March 2026 00:46:36 +0000 (0:00:01.270) 0:00:16.827 *******
2026-03-11 00:56:54.932184 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.932188 | orchestrator |
2026-03-11 00:56:54.932191 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-11 00:56:54.932448 | orchestrator | Wednesday 11 March 2026 00:46:36 +0000 (0:00:00.585) 0:00:17.412 *******
2026-03-11 00:56:54.932454 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932459 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932464 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932470 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932475 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932480 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932485 | orchestrator |
2026-03-11 00:56:54.932490 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-11 00:56:54.932495 | orchestrator | Wednesday 11 March 2026 00:46:37 +0000 (0:00:01.214) 0:00:18.627 *******
2026-03-11 00:56:54.932500 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932550 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932554 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932557 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932560 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932563 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932566 | orchestrator |
2026-03-11 00:56:54.932569 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-11 00:56:54.932573 | orchestrator | Wednesday 11 March 2026 00:46:39 +0000 (0:00:01.756) 0:00:20.384 *******
2026-03-11 00:56:54.932576 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932579 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932582 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932585 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932589 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932592 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932595 | orchestrator |
2026-03-11 00:56:54.932598 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-11 00:56:54.932601 | orchestrator | Wednesday 11 March 2026 00:46:41 +0000 (0:00:01.591) 0:00:21.975 *******
2026-03-11 00:56:54.932604 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932607 | orchestrator |
2026-03-11 00:56:54.932611 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-11 00:56:54.932646 | orchestrator | Wednesday 11 March 2026 00:46:41 +0000 (0:00:00.126) 0:00:22.101 *******
2026-03-11 00:56:54.932651 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932656 | orchestrator |
2026-03-11 00:56:54.932660 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-11 00:56:54.932665 | orchestrator | Wednesday 11 March 2026 00:46:41 +0000 (0:00:00.244) 0:00:22.346 *******
2026-03-11 00:56:54.932670 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932675 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932694 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932713 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932717 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932720 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932723 | orchestrator |
2026-03-11 00:56:54.932726 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-11 00:56:54.932729 | orchestrator | Wednesday 11 March 2026 00:46:42 +0000 (0:00:00.816) 0:00:23.162 *******
2026-03-11 00:56:54.932742 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932745 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932748 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932751 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932754 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932758 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932764 | orchestrator |
2026-03-11 00:56:54.932769 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-11 00:56:54.932774 | orchestrator | Wednesday 11 March 2026 00:46:44 +0000 (0:00:01.848) 0:00:25.011 *******
2026-03-11 00:56:54.932779 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932785 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932790 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932796 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932801 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932805 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932808 | orchestrator |
2026-03-11 00:56:54.932811 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-11 00:56:54.932814 | orchestrator | Wednesday 11 March 2026 00:46:45 +0000 (0:00:00.900) 0:00:25.912 *******
2026-03-11 00:56:54.932817 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932820 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932823 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932827 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932830 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932833 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932836 | orchestrator |
2026-03-11 00:56:54.932839 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-11 00:56:54.932842 | orchestrator | Wednesday 11 March 2026 00:46:46 +0000 (0:00:01.483) 0:00:27.395 *******
2026-03-11 00:56:54.932845 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932849 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932852 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932855 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932858 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932861 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932864 | orchestrator |
2026-03-11 00:56:54.932867 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-11 00:56:54.932870 | orchestrator | Wednesday 11 March 2026 00:46:47 +0000 (0:00:00.901) 0:00:28.297 *******
2026-03-11 00:56:54.932873 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932877 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932884 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932887 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932890 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932893 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932896 | orchestrator |
2026-03-11 00:56:54.932899 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-11 00:56:54.932903 | orchestrator | Wednesday 11 March 2026 00:46:48 +0000 (0:00:00.835) 0:00:29.132 *******
2026-03-11 00:56:54.932906 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.932909 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.932913 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.932916 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.932919 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.932922 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.932954 | orchestrator |
2026-03-11 00:56:54.932958 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-11 00:56:54.932961 | orchestrator | Wednesday 11 March 2026 00:46:48 +0000 (0:00:00.521) 0:00:29.654 ******* 2026-03-11 00:56:54.932965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780', 'dm-uuid-LVM-VhTvUy8RvGHmgQbSGejj2cFr5C79WFT6Sw4HHKX2gQ9Zm965zwcEXUzxkMLrdzNW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.932973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68', 'dm-uuid-LVM-Ibnvjb7qiyL3oKlGZEawB6I1PxbXAVvpsHGJ4HPJaZl9NC2bCMa0fe5u5ROaJIBl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.932987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933185 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sTmUoR-Ut6J-4hP1-1GLB-Jxdn-0eBV-X9DQAQ', 'scsi-0QEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7', 'scsi-SQEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VS3hfm-tDrl-9AMM-2hPw-Q0ky-zJOF-9LCQvj', 'scsi-0QEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4', 'scsi-SQEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062', 'scsi-SQEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4', 'dm-uuid-LVM-ayxYQM6BgxOnDbQpTfY36B6k6R58GQx52b9wUaDw5kmGghdJfV78isyTrF2Db4mX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534', 
'dm-uuid-LVM-zXdwVZqaatHAISu1ScQeMh8An0eym0d9aeSkX7kRNauWhsMRMPGSMzb91ZF2UJf3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933440 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.933443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3lUuww-Veet-z76Z-cWCI-ccba-Waub-32H1PZ', 'scsi-0QEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b', 'scsi-SQEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ttRbEm-RD1J-jehV-cszL-zUf6-jVNf-8qcgVJ', 'scsi-0QEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d', 'scsi-SQEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4', 'scsi-SQEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9', 'dm-uuid-LVM-RwBigCbtDnPNtpLNd3NQBMoVopg18EfqpOkfQGT603HfLPQy3J2C48eLgkQMUYmY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1', 'dm-uuid-LVM-eLU561C8FCWuxkw37i12AU1RPhNNcWoCbCF5MFGSO9qg37pntjfArU8cBAYHmszD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933536 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.933540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933568 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15', 
'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WqRIbN-wezn-9aAS-9Bct-7SUf-mOKz-kuNUw2', 'scsi-0QEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5', 'scsi-SQEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3xMrj1-W0UW-AeFs-gIlM-Xkde-1FKU-FN31Yv', 'scsi-0QEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20', 'scsi-SQEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb', 'scsi-SQEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-11 00:56:54.933797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933858 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933865 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.933890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933899 | 
orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.933904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.933908 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.933911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.933967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.934011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.934073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:54.934099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part1', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part14', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part15', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part16', 
'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.934141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:54.934146 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.934149 | orchestrator | 2026-03-11 00:56:54.934153 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-11 00:56:54.934161 | orchestrator | Wednesday 11 March 2026 00:46:50 +0000 (0:00:01.846) 0:00:31.501 ******* 2026-03-11 00:56:54.934165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780', 'dm-uuid-LVM-VhTvUy8RvGHmgQbSGejj2cFr5C79WFT6Sw4HHKX2gQ9Zm965zwcEXUzxkMLrdzNW'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68', 'dm-uuid-LVM-Ibnvjb7qiyL3oKlGZEawB6I1PxbXAVvpsHGJ4HPJaZl9NC2bCMa0fe5u5ROaJIBl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934178 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4', 'dm-uuid-LVM-ayxYQM6BgxOnDbQpTfY36B6k6R58GQx52b9wUaDw5kmGghdJfV78isyTrF2Db4mX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934563 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534', 'dm-uuid-LVM-zXdwVZqaatHAISu1ScQeMh8An0eym0d9aeSkX7kRNauWhsMRMPGSMzb91ZF2UJf3'], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sTmUoR-Ut6J-4hP1-1GLB-Jxdn-0eBV-X9DQAQ', 'scsi-0QEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7', 'scsi-SQEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934605 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934613 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VS3hfm-tDrl-9AMM-2hPw-Q0ky-zJOF-9LCQvj', 'scsi-0QEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4', 'scsi-SQEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062', 'scsi-SQEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934630 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934663 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934666 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9', 'dm-uuid-LVM-RwBigCbtDnPNtpLNd3NQBMoVopg18EfqpOkfQGT603HfLPQy3J2C48eLgkQMUYmY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934722 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1', 'dm-uuid-LVM-eLU561C8FCWuxkw37i12AU1RPhNNcWoCbCF5MFGSO9qg37pntjfArU8cBAYHmszD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3lUuww-Veet-z76Z-cWCI-ccba-Waub-32H1PZ', 'scsi-0QEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b', 'scsi-SQEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934766 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ttRbEm-RD1J-jehV-cszL-zUf6-jVNf-8qcgVJ', 'scsi-0QEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d', 'scsi-SQEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4', 'scsi-SQEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934777 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934784 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934817 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934838 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.934846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934851 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934859 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934865 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934870 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:56:54.934933 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WqRIbN-wezn-9aAS-9Bct-7SUf-mOKz-kuNUw2', 'scsi-0QEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5', 'scsi-SQEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934940 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.934952 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.934958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935000 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3xMrj1-W0UW-AeFs-gIlM-Xkde-1FKU-FN31Yv', 'scsi-0QEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20', 'scsi-SQEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935012 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935020 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935026 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935045 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb', 'scsi-SQEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935107 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935114 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935119 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935128 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 
'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935134 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935189 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935198 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935204 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ba47a05-150e-4018-97cd-15f15bf57c78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:56:54.935248 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935253 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935259 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb3ac54a-2c40-49b5-b33f-f5a6ca0b26eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:56:54.935265 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.935289 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935294 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.935297 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.935300 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935304 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935309 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935312 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935318 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935321 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935355 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935364 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:54.935373 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part1', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part14', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part15', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part16', 'scsi-SQEMU_QEMU_HARDDISK_29a3254a-e175-4d08-87e3-0a6181614d24-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.935382 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:56:54.935387 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.935401 | orchestrator |
2026-03-11 00:56:54.935440 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-11 00:56:54.935446 | orchestrator | Wednesday 11 March 2026 00:46:52 +0000 (0:00:01.376) 0:00:32.878 *******
2026-03-11 00:56:54.935449 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.935453 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.935456 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.935459 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.935462 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.935465 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.935468 | orchestrator |
2026-03-11 00:56:54.935471 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-11 00:56:54.935475 | orchestrator | Wednesday 11 March 2026 00:46:53 +0000 (0:00:01.243) 0:00:34.122 *******
2026-03-11 00:56:54.935478 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.935481 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.935484 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.935487 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.935490 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.935493 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.935496 | orchestrator |
2026-03-11 00:56:54.935499 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-11 00:56:54.935502 | orchestrator | Wednesday 11 March 2026 00:46:54 +0000 (0:00:00.611) 0:00:34.733 *******
2026-03-11 00:56:54.935505 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935509 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935512 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935515 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.935518 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.935521 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.935524 | orchestrator |
2026-03-11 00:56:54.935527 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-11 00:56:54.935530 | orchestrator | Wednesday 11 March 2026 00:46:54 +0000 (0:00:00.908) 0:00:35.642 *******
2026-03-11 00:56:54.935537 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935540 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935543 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935548 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.935553 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.935558 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.935569 | orchestrator |
2026-03-11 00:56:54.935595 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-11 00:56:54.935601 | orchestrator | Wednesday 11 March 2026 00:46:55 +0000 (0:00:00.963) 0:00:36.605 *******
2026-03-11 00:56:54.935606 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935611 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935617 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935621 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.935626 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.935632 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.935637 | orchestrator |
2026-03-11 00:56:54.935642 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-11 00:56:54.935652 | orchestrator | Wednesday 11 March 2026 00:46:57 +0000 (0:00:01.225) 0:00:37.831 *******
2026-03-11 00:56:54.935656 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935659 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935669 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935672 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.935675 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.935694 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.935698 | orchestrator |
2026-03-11 00:56:54.935701 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-11 00:56:54.935704 | orchestrator | Wednesday 11 March 2026 00:46:57 +0000 (0:00:00.566) 0:00:38.398 *******
2026-03-11 00:56:54.935708 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:56:54.935711 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:56:54.935714 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:56:54.935717 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:56:54.935720 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:56:54.935724 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:56:54.935727 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:56:54.935730 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:56:54.935733 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-11 00:56:54.935736 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-11 00:56:54.935739 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-11 00:56:54.935742 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-11 00:56:54.935745 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-11 00:56:54.935748 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-11 00:56:54.935751 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:56:54.935754 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-11 00:56:54.935757 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:56:54.935760 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-11 00:56:54.935764 | orchestrator |
2026-03-11 00:56:54.935767 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-11 00:56:54.935770 | orchestrator | Wednesday 11 March 2026 00:47:01 +0000 (0:00:03.480) 0:00:41.879 *******
2026-03-11 00:56:54.935773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:56:54.935776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:56:54.935779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:56:54.935782 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935805 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:56:54.935808 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:56:54.935811 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:56:54.935815 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:56:54.935859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:56:54.935864 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:56:54.935867 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935870 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:56:54.935873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-11 00:56:54.935876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-11 00:56:54.935879 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.935882 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-11 00:56:54.935886 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-11 00:56:54.935889 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-11 00:56:54.935892 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.935895 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-11 00:56:54.935898 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-11 00:56:54.935901 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-11 00:56:54.935904 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.935907 | orchestrator |
2026-03-11 00:56:54.935911 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-11 00:56:54.935914 | orchestrator | Wednesday 11 March 2026 00:47:02 +0000 (0:00:00.990) 0:00:42.869 *******
2026-03-11 00:56:54.935917 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.935920 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.935923 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.935927 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.935931 | orchestrator |
2026-03-11 00:56:54.935934 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-11 00:56:54.935938 | orchestrator | Wednesday 11 March 2026 00:47:03 +0000 (0:00:01.494) 0:00:44.364 *******
2026-03-11 00:56:54.935942 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935945 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935948 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935951 | orchestrator |
2026-03-11 00:56:54.935954 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-11 00:56:54.935957 | orchestrator | Wednesday 11 March 2026 00:47:04 +0000 (0:00:00.488) 0:00:44.852 *******
2026-03-11 00:56:54.935960 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935963 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935967 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935970 | orchestrator |
2026-03-11 00:56:54.935975 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-11 00:56:54.935979 | orchestrator | Wednesday 11 March 2026 00:47:04 +0000 (0:00:00.378) 0:00:45.230 *******
2026-03-11 00:56:54.935982 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.935985 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.935988 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.935991 | orchestrator |
2026-03-11 00:56:54.935994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-11 00:56:54.935998 | orchestrator | Wednesday 11 March 2026 00:47:05 +0000 (0:00:00.781) 0:00:46.011 *******
2026-03-11 00:56:54.936001 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.936007 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.936010 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.936013 | orchestrator |
2026-03-11 00:56:54.936016 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-11 00:56:54.936019 | orchestrator | Wednesday 11 March 2026 00:47:05 +0000 (0:00:00.477) 0:00:46.489 *******
2026-03-11 00:56:54.936023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.936026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.936029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.936032 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.936035 | orchestrator |
2026-03-11 00:56:54.936038 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-11 00:56:54.936041 | orchestrator | Wednesday 11 March 2026 00:47:06 +0000 (0:00:00.631) 0:00:47.120 *******
2026-03-11 00:56:54.936044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.936048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.936051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.936054 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.936057 | orchestrator |
2026-03-11 00:56:54.936060 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-11 00:56:54.936063 | orchestrator | Wednesday 11 March 2026 00:47:07 +0000 (0:00:00.646) 0:00:47.767 *******
2026-03-11 00:56:54.936066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.936070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.936073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.936076 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.936079 | orchestrator |
2026-03-11 00:56:54.936082 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-11 00:56:54.936085 | orchestrator | Wednesday 11 March 2026 00:47:07 +0000 (0:00:00.420) 0:00:48.187 *******
2026-03-11 00:56:54.936088 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.936091 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.936095 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.936098 | orchestrator |
2026-03-11 00:56:54.936101 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-11 00:56:54.936104 | orchestrator | Wednesday 11 March 2026 00:47:08 +0000 (0:00:00.482) 0:00:48.670 *******
2026-03-11 00:56:54.936107 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-11 00:56:54.936110 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-11 00:56:54.936123 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-11 00:56:54.936127 | orchestrator |
2026-03-11 00:56:54.936130 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-11 00:56:54.936133 | orchestrator | Wednesday 11 March 2026 00:47:08 +0000 (0:00:00.961) 0:00:49.631 *******
2026-03-11 00:56:54.936136 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:54.936147 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:54.936150 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:54.936153 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.936156 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-11 00:56:54.936159 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-11 00:56:54.936163 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-11 00:56:54.936166 | orchestrator |
2026-03-11 00:56:54.936169 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-11 00:56:54.936172 | orchestrator | Wednesday 11 March 2026 00:47:09 +0000 (0:00:00.682) 0:00:50.314 *******
2026-03-11 00:56:54.936178 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:54.936182 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:54.936185 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:54.936188 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.936191 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-11 00:56:54.936194 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-11 00:56:54.936197 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-11 00:56:54.936200 | orchestrator |
2026-03-11 00:56:54.936203 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-11 00:56:54.936206 | orchestrator | Wednesday 11 March 2026 00:47:11 +0000 (0:00:01.600) 0:00:51.914 *******
2026-03-11 00:56:54.936210 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.936214 | orchestrator |
2026-03-11 00:56:54.936219 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-11 00:56:54.936222 | orchestrator | Wednesday 11 March 2026 00:47:12 +0000 (0:00:01.374) 0:00:53.288 *******
2026-03-11 00:56:54.936225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.936228 | orchestrator |
2026-03-11 00:56:54.936231 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-11 00:56:54.936235 | orchestrator | Wednesday 11 March 2026 00:47:14 +0000 (0:00:01.543) 0:00:54.832 *******
2026-03-11 00:56:54.936238 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.936241 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.936244 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.936247 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.936250 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.936253 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.936256 | orchestrator |
2026-03-11 00:56:54.936259 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-11 00:56:54.936263 | orchestrator | Wednesday 11 March 2026 00:47:15 +0000 (0:00:01.169) 0:00:56.002 *******
2026-03-11 00:56:54.936266 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.936269 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.936272 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.936275 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.936278 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.936281 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.936284 | orchestrator |
2026-03-11 00:56:54.936288 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-11 00:56:54.936291 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.777) 0:00:56.779 *******
2026-03-11 00:56:54.936294 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.936297 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.936300 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.936303 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.936306 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.936310 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.936313 | orchestrator |
2026-03-11 00:56:54.936316 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-11 00:56:54.936319 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:00.794) 0:00:57.574 *******
2026-03-11 00:56:54.936322 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.936325 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.936328 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.936334 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.936337 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.936340 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.936343 | orchestrator |
2026-03-11 00:56:54.936346 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-11 00:56:54.936350 | orchestrator |
Wednesday 11 March 2026 00:47:18 +0000 (0:00:01.279) 0:00:58.853 ******* 2026-03-11 00:56:54.936353 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936356 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936359 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936362 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.936365 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.936382 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.936389 | orchestrator | 2026-03-11 00:56:54.936394 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-11 00:56:54.936399 | orchestrator | Wednesday 11 March 2026 00:47:19 +0000 (0:00:01.224) 0:01:00.078 ******* 2026-03-11 00:56:54.936404 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936408 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936413 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936418 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936423 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.936427 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936432 | orchestrator | 2026-03-11 00:56:54.936436 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-11 00:56:54.936442 | orchestrator | Wednesday 11 March 2026 00:47:19 +0000 (0:00:00.516) 0:01:00.595 ******* 2026-03-11 00:56:54.936447 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936452 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936457 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936462 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936467 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.936472 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936477 | orchestrator | 2026-03-11 00:56:54.936482 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-11 00:56:54.936487 | orchestrator | Wednesday 11 March 2026 00:47:20 +0000 (0:00:00.628) 0:01:01.223 ******* 2026-03-11 00:56:54.936490 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.936493 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.936497 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.936500 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.936504 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.936507 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.936511 | orchestrator | 2026-03-11 00:56:54.936515 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-11 00:56:54.936518 | orchestrator | Wednesday 11 March 2026 00:47:21 +0000 (0:00:01.172) 0:01:02.395 ******* 2026-03-11 00:56:54.936522 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.936525 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.936529 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.936532 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.936536 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.936539 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.936543 | orchestrator | 2026-03-11 00:56:54.936546 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-11 00:56:54.936550 | orchestrator | Wednesday 11 March 2026 00:47:22 +0000 (0:00:01.202) 0:01:03.597 ******* 2026-03-11 00:56:54.936554 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936557 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936561 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936564 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936568 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.936571 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936575 | 
orchestrator | 2026-03-11 00:56:54.936581 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-11 00:56:54.936588 | orchestrator | Wednesday 11 March 2026 00:47:23 +0000 (0:00:00.657) 0:01:04.255 ******* 2026-03-11 00:56:54.936592 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936596 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936599 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936602 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.936606 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.936610 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.936615 | orchestrator | 2026-03-11 00:56:54.936620 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-11 00:56:54.936628 | orchestrator | Wednesday 11 March 2026 00:47:24 +0000 (0:00:00.959) 0:01:05.214 ******* 2026-03-11 00:56:54.936634 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.936639 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.936644 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.936649 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936654 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.936659 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936664 | orchestrator | 2026-03-11 00:56:54.936669 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-11 00:56:54.936674 | orchestrator | Wednesday 11 March 2026 00:47:25 +0000 (0:00:00.627) 0:01:05.841 ******* 2026-03-11 00:56:54.936690 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.936696 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.936701 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.936706 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936711 | orchestrator | skipping: [testbed-node-1] 2026-03-11 
00:56:54.936717 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936721 | orchestrator | 2026-03-11 00:56:54.936725 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-11 00:56:54.936728 | orchestrator | Wednesday 11 March 2026 00:47:26 +0000 (0:00:01.089) 0:01:06.931 ******* 2026-03-11 00:56:54.936732 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.936736 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.936739 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.936743 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936747 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.936750 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936754 | orchestrator | 2026-03-11 00:56:54.936757 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-11 00:56:54.936761 | orchestrator | Wednesday 11 March 2026 00:47:27 +0000 (0:00:01.161) 0:01:08.092 ******* 2026-03-11 00:56:54.936765 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936768 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936772 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936776 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936779 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.936783 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936786 | orchestrator | 2026-03-11 00:56:54.936790 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-11 00:56:54.936793 | orchestrator | Wednesday 11 March 2026 00:47:29 +0000 (0:00:02.329) 0:01:10.422 ******* 2026-03-11 00:56:54.936797 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936801 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936804 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936808 | 
orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.936833 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.936840 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.936845 | orchestrator | 2026-03-11 00:56:54.936850 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-11 00:56:54.936856 | orchestrator | Wednesday 11 March 2026 00:47:30 +0000 (0:00:00.949) 0:01:11.372 ******* 2026-03-11 00:56:54.936862 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.936868 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.936878 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.936884 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.936889 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.936892 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.936896 | orchestrator | 2026-03-11 00:56:54.936900 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-11 00:56:54.936903 | orchestrator | Wednesday 11 March 2026 00:47:31 +0000 (0:00:01.084) 0:01:12.457 ******* 2026-03-11 00:56:54.936906 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.936909 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.936912 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.936915 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.936918 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.936921 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.936924 | orchestrator | 2026-03-11 00:56:54.936927 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-11 00:56:54.936930 | orchestrator | Wednesday 11 March 2026 00:47:33 +0000 (0:00:01.315) 0:01:13.772 ******* 2026-03-11 00:56:54.936933 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.936936 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.936940 | 
orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.936943 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.936946 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.936949 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.936952 | orchestrator | 2026-03-11 00:56:54.936955 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-11 00:56:54.936958 | orchestrator | Wednesday 11 March 2026 00:47:35 +0000 (0:00:02.282) 0:01:16.055 ******* 2026-03-11 00:56:54.936961 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.936964 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.936967 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.936970 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.936973 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.936976 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.936979 | orchestrator | 2026-03-11 00:56:54.936983 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-11 00:56:54.936986 | orchestrator | Wednesday 11 March 2026 00:47:37 +0000 (0:00:02.326) 0:01:18.382 ******* 2026-03-11 00:56:54.936989 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.936992 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.936995 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.936998 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.937001 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.937007 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.937010 | orchestrator | 2026-03-11 00:56:54.937013 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-11 00:56:54.937016 | orchestrator | Wednesday 11 March 2026 00:47:40 +0000 (0:00:02.481) 0:01:20.864 ******* 2026-03-11 00:56:54.937019 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.937023 | orchestrator | 2026-03-11 00:56:54.937026 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-11 00:56:54.937029 | orchestrator | Wednesday 11 March 2026 00:47:41 +0000 (0:00:01.323) 0:01:22.188 ******* 2026-03-11 00:56:54.937032 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937035 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937038 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937041 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937044 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937047 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937050 | orchestrator | 2026-03-11 00:56:54.937053 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-11 00:56:54.937056 | orchestrator | Wednesday 11 March 2026 00:47:42 +0000 (0:00:01.025) 0:01:23.213 ******* 2026-03-11 00:56:54.937062 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937065 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937068 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937071 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937074 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937077 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937080 | orchestrator | 2026-03-11 00:56:54.937083 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-11 00:56:54.937086 | orchestrator | Wednesday 11 March 2026 00:47:44 +0000 (0:00:01.771) 0:01:24.984 ******* 2026-03-11 00:56:54.937089 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-11 
00:56:54.937092 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-11 00:56:54.937095 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-11 00:56:54.937098 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-11 00:56:54.937101 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-11 00:56:54.937105 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-11 00:56:54.937108 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-11 00:56:54.937111 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-11 00:56:54.937114 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-11 00:56:54.937117 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-11 00:56:54.937133 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-11 00:56:54.937136 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-11 00:56:54.937139 | orchestrator | 2026-03-11 00:56:54.937142 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-11 00:56:54.937146 | orchestrator | Wednesday 11 March 2026 00:47:45 +0000 (0:00:01.657) 0:01:26.642 ******* 2026-03-11 00:56:54.937149 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.937152 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.937156 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.937161 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.937166 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.937171 | 
orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.937176 | orchestrator | 2026-03-11 00:56:54.937182 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-11 00:56:54.937190 | orchestrator | Wednesday 11 March 2026 00:47:47 +0000 (0:00:01.573) 0:01:28.216 ******* 2026-03-11 00:56:54.937195 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937200 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937205 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937210 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937215 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937219 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937224 | orchestrator | 2026-03-11 00:56:54.937229 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-11 00:56:54.937234 | orchestrator | Wednesday 11 March 2026 00:47:48 +0000 (0:00:00.624) 0:01:28.840 ******* 2026-03-11 00:56:54.937238 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937243 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937248 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937253 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937258 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937263 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937273 | orchestrator | 2026-03-11 00:56:54.937278 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-11 00:56:54.937284 | orchestrator | Wednesday 11 March 2026 00:47:49 +0000 (0:00:00.953) 0:01:29.794 ******* 2026-03-11 00:56:54.937290 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937295 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937300 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937306 | 
orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937311 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937316 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937321 | orchestrator | 2026-03-11 00:56:54.937327 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-11 00:56:54.937332 | orchestrator | Wednesday 11 March 2026 00:47:49 +0000 (0:00:00.613) 0:01:30.407 ******* 2026-03-11 00:56:54.937341 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.937347 | orchestrator | 2026-03-11 00:56:54.937353 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-11 00:56:54.937358 | orchestrator | Wednesday 11 March 2026 00:47:50 +0000 (0:00:01.164) 0:01:31.571 ******* 2026-03-11 00:56:54.937364 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.937370 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.937375 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.937380 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.937385 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.937391 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.937396 | orchestrator | 2026-03-11 00:56:54.937401 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-11 00:56:54.937406 | orchestrator | Wednesday 11 March 2026 00:48:42 +0000 (0:00:52.044) 0:02:23.615 ******* 2026-03-11 00:56:54.937412 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-11 00:56:54.937418 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-11 00:56:54.937423 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-11 00:56:54.937429 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937433 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-11 00:56:54.937436 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-11 00:56:54.937439 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-11 00:56:54.937442 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937446 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-11 00:56:54.937449 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-11 00:56:54.937452 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-11 00:56:54.937455 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937458 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-11 00:56:54.937461 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-11 00:56:54.937465 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-11 00:56:54.937468 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937471 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-11 00:56:54.937474 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-11 00:56:54.937477 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-11 00:56:54.937480 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937503 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-11 00:56:54.937510 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-11 00:56:54.937514 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-11 00:56:54.937517 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937520 | orchestrator | 2026-03-11 00:56:54.937523 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-11 00:56:54.937526 | orchestrator | Wednesday 11 March 2026 00:48:43 +0000 (0:00:00.577) 0:02:24.193 ******* 2026-03-11 00:56:54.937529 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937532 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937536 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937539 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937542 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937545 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937548 | orchestrator | 2026-03-11 00:56:54.937551 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-11 00:56:54.937555 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.707) 0:02:24.901 ******* 2026-03-11 00:56:54.937558 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937561 | orchestrator | 2026-03-11 00:56:54.937564 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-11 00:56:54.937567 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.149) 0:02:25.050 ******* 2026-03-11 00:56:54.937570 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937573 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937576 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937579 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937582 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937586 | orchestrator | skipping: 
[testbed-node-2] 2026-03-11 00:56:54.937589 | orchestrator | 2026-03-11 00:56:54.937592 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-11 00:56:54.937595 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.507) 0:02:25.558 ******* 2026-03-11 00:56:54.937598 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937601 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937604 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937608 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937611 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937614 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937617 | orchestrator | 2026-03-11 00:56:54.937620 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-11 00:56:54.937623 | orchestrator | Wednesday 11 March 2026 00:48:45 +0000 (0:00:00.701) 0:02:26.259 ******* 2026-03-11 00:56:54.937626 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.937631 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.937636 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.937641 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.937650 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.937655 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.937661 | orchestrator | 2026-03-11 00:56:54.937666 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-11 00:56:54.937671 | orchestrator | Wednesday 11 March 2026 00:48:46 +0000 (0:00:00.513) 0:02:26.773 ******* 2026-03-11 00:56:54.937677 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.937697 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.937701 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.937713 | orchestrator | ok: [testbed-node-0] 2026-03-11 
00:56:54.937716 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.937719 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.937723 | orchestrator |
2026-03-11 00:56:54.937726 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-11 00:56:54.937729 | orchestrator | Wednesday 11 March 2026 00:48:48 +0000 (0:00:02.211) 0:02:28.985 *******
2026-03-11 00:56:54.937739 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.937742 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.937745 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.937748 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.937751 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.937755 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.937758 | orchestrator |
2026-03-11 00:56:54.937761 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-11 00:56:54.937765 | orchestrator | Wednesday 11 March 2026 00:48:48 +0000 (0:00:00.655) 0:02:29.641 *******
2026-03-11 00:56:54.937769 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.937772 | orchestrator |
2026-03-11 00:56:54.937776 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-11 00:56:54.937779 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:00.945) 0:02:30.586 *******
2026-03-11 00:56:54.937782 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.937785 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.937788 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.937791 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.937794 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.937798 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.937801 | orchestrator |
2026-03-11 00:56:54.937805 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-11 00:56:54.937810 | orchestrator | Wednesday 11 March 2026 00:48:50 +0000 (0:00:00.704) 0:02:31.291 *******
2026-03-11 00:56:54.937815 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.937821 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.937826 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.937831 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.937837 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.937840 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.937844 | orchestrator |
2026-03-11 00:56:54.937847 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-11 00:56:54.937850 | orchestrator | Wednesday 11 March 2026 00:48:51 +0000 (0:00:00.560) 0:02:31.851 *******
2026-03-11 00:56:54.937853 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.937856 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.937874 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.937877 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.937880 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.937884 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.937887 | orchestrator |
2026-03-11 00:56:54.937890 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-11 00:56:54.937893 | orchestrator | Wednesday 11 March 2026 00:48:51 +0000 (0:00:00.708) 0:02:32.560 *******
2026-03-11 00:56:54.937896 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.937899 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.937902 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.937905 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.937908 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.937911 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.937915 | orchestrator |
2026-03-11 00:56:54.937918 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-11 00:56:54.937921 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.531) 0:02:33.092 *******
2026-03-11 00:56:54.937924 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.937927 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.937930 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.937933 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.937936 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.937941 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.937946 | orchestrator |
2026-03-11 00:56:54.937958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-11 00:56:54.937963 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.762) 0:02:33.854 *******
2026-03-11 00:56:54.937968 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.937972 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.937977 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.937981 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.937986 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.937994 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.937999 | orchestrator |
2026-03-11 00:56:54.938004 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-11 00:56:54.938010 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.664) 0:02:34.519 *******
2026-03-11 00:56:54.938043 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.938048 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.938053 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.938058 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.938063 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.938068 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.938073 | orchestrator |
2026-03-11 00:56:54.938078 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-11 00:56:54.938084 | orchestrator | Wednesday 11 March 2026 00:48:54 +0000 (0:00:00.828) 0:02:35.347 *******
2026-03-11 00:56:54.938090 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.938094 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.938102 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.938107 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.938113 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.938117 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.938122 | orchestrator |
2026-03-11 00:56:54.938127 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-11 00:56:54.938133 | orchestrator | Wednesday 11 March 2026 00:48:55 +0000 (0:00:00.636) 0:02:35.984 *******
2026-03-11 00:56:54.938138 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.938143 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.938149 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.938153 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.938158 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.938164 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.938169 | orchestrator |
2026-03-11 00:56:54.938174 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-11 00:56:54.938179 | orchestrator | Wednesday 11 March 2026 00:48:56 +0000 (0:00:01.309) 0:02:37.294 *******
2026-03-11 00:56:54.938184 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.938189 | orchestrator |
2026-03-11 00:56:54.938195 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-11 00:56:54.938199 | orchestrator | Wednesday 11 March 2026 00:48:57 +0000 (0:00:01.165) 0:02:38.460 *******
2026-03-11 00:56:54.938204 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-11 00:56:54.938209 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-11 00:56:54.938214 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-11 00:56:54.938219 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-11 00:56:54.938224 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-11 00:56:54.938229 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-11 00:56:54.938235 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-11 00:56:54.938239 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-11 00:56:54.938244 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-11 00:56:54.938249 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:54.938260 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-11 00:56:54.938265 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-11 00:56:54.938270 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-11 00:56:54.938275 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:54.938280 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:54.938286 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:54.938291 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:54.938296 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:54.938326 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:54.938332 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:54.938337 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:54.938342 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:54.938348 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:54.938353 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:54.938358 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:54.938364 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:54.938369 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:54.938374 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:54.938380 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:54.938385 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:54.938390 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:54.938395 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:54.938400 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:54.938406 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:54.938411 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:54.938416 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:54.938421 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:54.938426 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:54.938432 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:54.938437 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:54.938442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:54.938447 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:54.938452 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:54.938457 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:54.938462 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:54.938467 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:54.938472 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:54.938481 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:54.938486 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:54.938491 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:54.938496 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:54.938501 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:54.938506 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:54.938516 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:54.938521 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:54.938527 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:54.938532 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:54.938537 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:54.938542 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:54.938547 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:54.938552 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:54.938557 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:54.938562 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:54.938567 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:54.938573 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:54.938578 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:54.938583 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:54.938588 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:54.938593 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:54.938599 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:54.938604 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:54.938609 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:54.938614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:54.938620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:54.938625 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:54.938630 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:54.938654 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-11 00:56:54.938661 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:54.938666 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:54.938671 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:54.938676 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:54.938707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:54.938713 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:54.938718 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-11 00:56:54.938723 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:54.938728 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-11 00:56:54.938734 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:54.938739 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-11 00:56:54.938744 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-11 00:56:54.938749 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-11 00:56:54.938754 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-11 00:56:54.938760 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-11 00:56:54.938765 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-11 00:56:54.938774 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-11 00:56:54.938779 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-11 00:56:54.938785 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-11 00:56:54.938790 | orchestrator |
2026-03-11 00:56:54.938795 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-11 00:56:54.938800 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:07.319) 0:02:45.779 *******
2026-03-11 00:56:54.938806 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.938811 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.938816 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.938821 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.938827 | orchestrator |
2026-03-11 00:56:54.938832 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-11 00:56:54.938838 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:00.862) 0:02:46.642 *******
2026-03-11 00:56:54.938847 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.938853 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.938858 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.938864 | orchestrator |
2026-03-11 00:56:54.938869 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-11 00:56:54.938874 | orchestrator | Wednesday 11 March 2026 00:49:06 +0000 (0:00:00.910) 0:02:47.552 *******
2026-03-11 00:56:54.938879 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.938885 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.938891 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.938896 | orchestrator |
2026-03-11 00:56:54.938901 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-11 00:56:54.938906 | orchestrator | Wednesday 11 March 2026 00:49:08 +0000 (0:00:01.557) 0:02:49.110 *******
2026-03-11 00:56:54.938912 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.938916 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.938919 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.938922 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.938925 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.938928 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.938931 | orchestrator |
2026-03-11 00:56:54.938934 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-11 00:56:54.938938 | orchestrator | Wednesday 11 March 2026 00:49:09 +0000 (0:00:00.692) 0:02:49.803 *******
2026-03-11 00:56:54.938941 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.938944 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.938947 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.938950 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.938953 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.938956 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.938959 | orchestrator |
2026-03-11 00:56:54.938962 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-11 00:56:54.938966 | orchestrator | Wednesday 11 March 2026 00:49:09 +0000 (0:00:00.791) 0:02:50.595 *******
2026-03-11 00:56:54.938969 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.938972 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.938977 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.938981 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.938984 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.938987 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.938990 | orchestrator |
2026-03-11 00:56:54.939008 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-11 00:56:54.939012 | orchestrator | Wednesday 11 March 2026 00:49:10 +0000 (0:00:00.652) 0:02:51.247 *******
2026-03-11 00:56:54.939016 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939019 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939022 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939025 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939028 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939031 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939034 | orchestrator |
2026-03-11 00:56:54.939039 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-11 00:56:54.939045 | orchestrator | Wednesday 11 March 2026 00:49:11 +0000 (0:00:00.774) 0:02:52.022 *******
2026-03-11 00:56:54.939052 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939058 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939063 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939068 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939073 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939078 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939083 | orchestrator |
2026-03-11 00:56:54.939088 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-11 00:56:54.939093 | orchestrator | Wednesday 11 March 2026 00:49:11 +0000 (0:00:00.596) 0:02:52.619 *******
2026-03-11 00:56:54.939098 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939102 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939107 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939112 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939117 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939123 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939128 | orchestrator |
2026-03-11 00:56:54.939134 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-11 00:56:54.939139 | orchestrator | Wednesday 11 March 2026 00:49:12 +0000 (0:00:00.788) 0:02:53.407 *******
2026-03-11 00:56:54.939145 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939150 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939155 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939160 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939165 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939170 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939175 | orchestrator |
2026-03-11 00:56:54.939180 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-11 00:56:54.939186 | orchestrator | Wednesday 11 March 2026 00:49:13 +0000 (0:00:00.595) 0:02:54.003 *******
2026-03-11 00:56:54.939191 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939197 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939202 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939207 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939213 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939222 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939225 | orchestrator |
2026-03-11 00:56:54.939228 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-11 00:56:54.939232 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:01.062) 0:02:55.066 *******
2026-03-11 00:56:54.939235 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939238 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939241 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939244 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.939250 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.939253 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.939257 | orchestrator |
2026-03-11 00:56:54.939262 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-11 00:56:54.939267 | orchestrator | Wednesday 11 March 2026 00:49:17 +0000 (0:00:03.367) 0:02:58.433 *******
2026-03-11 00:56:54.939273 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.939277 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.939282 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939287 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.939293 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939296 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939299 | orchestrator |
2026-03-11 00:56:54.939303 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-11 00:56:54.939306 | orchestrator | Wednesday 11 March 2026 00:49:18 +0000 (0:00:01.035) 0:02:59.469 *******
2026-03-11 00:56:54.939309 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.939312 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.939315 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.939318 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939321 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939325 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939328 | orchestrator |
2026-03-11 00:56:54.939331 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-11 00:56:54.939334 | orchestrator | Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.616) 0:03:00.085 *******
2026-03-11 00:56:54.939337 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939340 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939343 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939348 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939353 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939358 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939365 | orchestrator |
2026-03-11 00:56:54.939372 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-11 00:56:54.939377 | orchestrator | Wednesday 11 March 2026 00:49:20 +0000 (0:00:00.785) 0:03:00.871 *******
2026-03-11 00:56:54.939382 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.939387 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.939392 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:54.939397 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939425 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939433 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939438 | orchestrator |
2026-03-11 00:56:54.939443 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-11 00:56:54.939448 | orchestrator | Wednesday 11 March 2026 00:49:20 +0000 (0:00:00.675) 0:03:01.546 *******
2026-03-11 00:56:54.939455 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-11 00:56:54.939462 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-11 00:56:54.939467 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-11 00:56:54.939478 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-11 00:56:54.939483 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939490 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-11 00:56:54.939501 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-11 00:56:54.939506 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939511 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939516 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939521 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939527 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939532 | orchestrator |
2026-03-11 00:56:54.939537 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-11 00:56:54.939540 | orchestrator | Wednesday 11 March 2026 00:49:21 +0000 (0:00:00.816) 0:03:02.363 *******
2026-03-11 00:56:54.939543 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939546 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939549 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939552 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939555 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939559 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939562 | orchestrator |
2026-03-11 00:56:54.939565 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-11 00:56:54.939568 | orchestrator | Wednesday 11 March 2026 00:49:22 +0000 (0:00:00.607) 0:03:02.970 *******
2026-03-11 00:56:54.939571 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939574 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939577 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939580 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939583 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939586 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939589 | orchestrator |
2026-03-11 00:56:54.939593 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-11 00:56:54.939596 | orchestrator | Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.808) 0:03:03.779 *******
2026-03-11 00:56:54.939599 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939602 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939605 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939608 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939611 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939614 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939617 | orchestrator |
2026-03-11 00:56:54.939621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-11 00:56:54.939624 | orchestrator | Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.595) 0:03:04.374 *******
2026-03-11 00:56:54.939627 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939630 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939633 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939639 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939642 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939645 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939649 | orchestrator |
2026-03-11 00:56:54.939652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-11 00:56:54.939670 | orchestrator | Wednesday 11 March 2026 00:49:24 +0000 (0:00:00.565) 0:03:05.082 *******
2026-03-11 00:56:54.939674 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939677 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.939701 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.939705 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939708 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939711 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939714 | orchestrator |
2026-03-11 00:56:54.939717 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-11 00:56:54.939720 | orchestrator | Wednesday 11 March 2026 00:49:24 +0000 (0:00:00.565) 0:03:05.648 *******
2026-03-11 00:56:54.939724 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.939727 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939730 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.939733 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939736 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.939739 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939742 | orchestrator |
2026-03-11 00:56:54.939745 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-11 00:56:54.939748 | orchestrator | Wednesday 11 March 2026 00:49:26 +0000 (0:00:01.150) 0:03:06.798 *******
2026-03-11 00:56:54.939751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.939755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.939758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.939761 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939764 | orchestrator |
2026-03-11 00:56:54.939767 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-11 00:56:54.939770 | orchestrator | Wednesday 11 March 2026 00:49:26 +0000 (0:00:00.382) 0:03:07.181 *******
2026-03-11 00:56:54.939773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.939776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.939780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.939783 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939786 | orchestrator |
2026-03-11 00:56:54.939789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-11 00:56:54.939792 | orchestrator | Wednesday 11 March 2026 00:49:26 +0000 (0:00:00.365) 0:03:07.547 *******
2026-03-11 00:56:54.939795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.939798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.939801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.939805 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.939808 | orchestrator |
2026-03-11 00:56:54.939811 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-11 00:56:54.939816 | orchestrator | Wednesday 11 March 2026 00:49:27 +0000 (0:00:00.311) 0:03:07.858 *******
2026-03-11 00:56:54.939819 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.939822 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.939826 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.939829 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939832 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939835 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939838 | orchestrator |
2026-03-11 00:56:54.939841 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-11 00:56:54.939845 | orchestrator | Wednesday 11 March 2026 00:49:27 +0000 (0:00:00.551) 0:03:08.410 *******
2026-03-11 00:56:54.939852 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-11 00:56:54.939856 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-11 00:56:54.939859 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.939862 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-11 00:56:54.939865 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.939868 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-11 00:56:54.939871 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.939874 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-11 00:56:54.939877 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-11 00:56:54.939881 | orchestrator |
2026-03-11 00:56:54.939884 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-11 00:56:54.939887 | orchestrator | Wednesday 11 March 2026 00:49:29 +0000 (0:00:02.063) 0:03:10.474 *******
2026-03-11 00:56:54.939890 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:54.939893 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:54.939896 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.939900 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.939903 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.939906 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:54.939909 | orchestrator |
2026-03-11 00:56:54.939912 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-11 00:56:54.939915 | orchestrator | Wednesday 11 March 2026 00:49:32 +0000 (0:00:03.023) 0:03:13.498 *******
2026-03-11 00:56:54.939918 | orchestrator | changed: [testbed-node-4]
2026-03-11
00:56:54.939923 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.939928 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.939933 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.939938 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.939943 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.939948 | orchestrator | 2026-03-11 00:56:54.939953 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-11 00:56:54.939958 | orchestrator | Wednesday 11 March 2026 00:49:33 +0000 (0:00:01.051) 0:03:14.549 ******* 2026-03-11 00:56:54.939964 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.939969 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.939974 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.939980 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.939985 | orchestrator | 2026-03-11 00:56:54.939991 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-11 00:56:54.940008 | orchestrator | Wednesday 11 March 2026 00:49:34 +0000 (0:00:00.923) 0:03:15.472 ******* 2026-03-11 00:56:54.940012 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.940015 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.940019 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.940022 | orchestrator | 2026-03-11 00:56:54.940025 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-11 00:56:54.940028 | orchestrator | Wednesday 11 March 2026 00:49:35 +0000 (0:00:00.317) 0:03:15.790 ******* 2026-03-11 00:56:54.940031 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.940034 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.940037 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.940040 | 
orchestrator | 2026-03-11 00:56:54.940044 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-11 00:56:54.940047 | orchestrator | Wednesday 11 March 2026 00:49:36 +0000 (0:00:01.323) 0:03:17.113 ******* 2026-03-11 00:56:54.940050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-11 00:56:54.940053 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-11 00:56:54.940056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-11 00:56:54.940059 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940065 | orchestrator | 2026-03-11 00:56:54.940069 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-11 00:56:54.940072 | orchestrator | Wednesday 11 March 2026 00:49:37 +0000 (0:00:00.994) 0:03:18.107 ******* 2026-03-11 00:56:54.940075 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.940078 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.940083 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.940089 | orchestrator | 2026-03-11 00:56:54.940094 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-11 00:56:54.940099 | orchestrator | Wednesday 11 March 2026 00:49:37 +0000 (0:00:00.384) 0:03:18.492 ******* 2026-03-11 00:56:54.940105 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940110 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.940115 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.940121 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.940126 | orchestrator | 2026-03-11 00:56:54.940131 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-11 00:56:54.940137 | orchestrator | Wednesday 11 March 2026 
00:49:38 +0000 (0:00:00.926) 0:03:19.418 ******* 2026-03-11 00:56:54.940141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:54.940144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:54.940149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:54.940154 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940159 | orchestrator | 2026-03-11 00:56:54.940164 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-11 00:56:54.940172 | orchestrator | Wednesday 11 March 2026 00:49:39 +0000 (0:00:00.334) 0:03:19.753 ******* 2026-03-11 00:56:54.940178 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940183 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.940188 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.940193 | orchestrator | 2026-03-11 00:56:54.940198 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-11 00:56:54.940203 | orchestrator | Wednesday 11 March 2026 00:49:39 +0000 (0:00:00.275) 0:03:20.029 ******* 2026-03-11 00:56:54.940209 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940214 | orchestrator | 2026-03-11 00:56:54.940219 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-11 00:56:54.940224 | orchestrator | Wednesday 11 March 2026 00:49:39 +0000 (0:00:00.197) 0:03:20.226 ******* 2026-03-11 00:56:54.940228 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940231 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.940234 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.940237 | orchestrator | 2026-03-11 00:56:54.940240 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-11 00:56:54.940243 | orchestrator | Wednesday 11 March 
2026 00:49:39 +0000 (0:00:00.307) 0:03:20.533 ******* 2026-03-11 00:56:54.940246 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940249 | orchestrator | 2026-03-11 00:56:54.940252 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-11 00:56:54.940256 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:00.220) 0:03:20.754 ******* 2026-03-11 00:56:54.940259 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940262 | orchestrator | 2026-03-11 00:56:54.940265 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-11 00:56:54.940268 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:00.204) 0:03:20.958 ******* 2026-03-11 00:56:54.940271 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940274 | orchestrator | 2026-03-11 00:56:54.940277 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-11 00:56:54.940280 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:00.102) 0:03:21.060 ******* 2026-03-11 00:56:54.940284 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940289 | orchestrator | 2026-03-11 00:56:54.940293 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-11 00:56:54.940296 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:00.550) 0:03:21.611 ******* 2026-03-11 00:56:54.940299 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940302 | orchestrator | 2026-03-11 00:56:54.940305 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-11 00:56:54.940308 | orchestrator | Wednesday 11 March 2026 00:49:41 +0000 (0:00:00.203) 0:03:21.814 ******* 2026-03-11 00:56:54.940311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:54.940315 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:54.940318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:54.940321 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940324 | orchestrator | 2026-03-11 00:56:54.940327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-11 00:56:54.940343 | orchestrator | Wednesday 11 March 2026 00:49:41 +0000 (0:00:00.366) 0:03:22.180 ******* 2026-03-11 00:56:54.940348 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940354 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.940359 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.940364 | orchestrator | 2026-03-11 00:56:54.940370 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-11 00:56:54.940375 | orchestrator | Wednesday 11 March 2026 00:49:41 +0000 (0:00:00.298) 0:03:22.479 ******* 2026-03-11 00:56:54.940381 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940386 | orchestrator | 2026-03-11 00:56:54.940391 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-11 00:56:54.940397 | orchestrator | Wednesday 11 March 2026 00:49:42 +0000 (0:00:00.194) 0:03:22.674 ******* 2026-03-11 00:56:54.940402 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940408 | orchestrator | 2026-03-11 00:56:54.940413 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-11 00:56:54.940419 | orchestrator | Wednesday 11 March 2026 00:49:42 +0000 (0:00:00.166) 0:03:22.840 ******* 2026-03-11 00:56:54.940424 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940430 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.940435 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.940440 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.940445 | orchestrator | 2026-03-11 00:56:54.940451 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-11 00:56:54.940456 | orchestrator | Wednesday 11 March 2026 00:49:42 +0000 (0:00:00.787) 0:03:23.627 ******* 2026-03-11 00:56:54.940463 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.940466 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.940469 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.940472 | orchestrator | 2026-03-11 00:56:54.940475 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-11 00:56:54.940478 | orchestrator | Wednesday 11 March 2026 00:49:43 +0000 (0:00:00.289) 0:03:23.917 ******* 2026-03-11 00:56:54.940481 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.940485 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.940488 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.940491 | orchestrator | 2026-03-11 00:56:54.940494 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-11 00:56:54.940497 | orchestrator | Wednesday 11 March 2026 00:49:44 +0000 (0:00:01.191) 0:03:25.109 ******* 2026-03-11 00:56:54.940500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:54.940503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:54.940506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:54.940509 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940515 | orchestrator | 2026-03-11 00:56:54.940521 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-11 00:56:54.940524 | orchestrator | Wednesday 11 March 2026 00:49:45 +0000 (0:00:00.864) 
0:03:25.973 ******* 2026-03-11 00:56:54.940527 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.940530 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.940533 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.940536 | orchestrator | 2026-03-11 00:56:54.940539 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-11 00:56:54.940542 | orchestrator | Wednesday 11 March 2026 00:49:45 +0000 (0:00:00.495) 0:03:26.469 ******* 2026-03-11 00:56:54.940546 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940549 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.940552 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.940555 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.940561 | orchestrator | 2026-03-11 00:56:54.940566 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-11 00:56:54.940571 | orchestrator | Wednesday 11 March 2026 00:49:46 +0000 (0:00:00.691) 0:03:27.160 ******* 2026-03-11 00:56:54.940576 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.940582 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.940587 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.940592 | orchestrator | 2026-03-11 00:56:54.940598 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-11 00:56:54.940603 | orchestrator | Wednesday 11 March 2026 00:49:46 +0000 (0:00:00.440) 0:03:27.601 ******* 2026-03-11 00:56:54.940608 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.940614 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.940619 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.940624 | orchestrator | 2026-03-11 00:56:54.940628 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-03-11 00:56:54.940631 | orchestrator | Wednesday 11 March 2026 00:49:47 +0000 (0:00:01.018) 0:03:28.619 ******* 2026-03-11 00:56:54.940634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:54.940638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:54.940641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:54.940644 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940647 | orchestrator | 2026-03-11 00:56:54.940650 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-11 00:56:54.940653 | orchestrator | Wednesday 11 March 2026 00:49:48 +0000 (0:00:00.605) 0:03:29.224 ******* 2026-03-11 00:56:54.940657 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.940660 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.940663 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.940666 | orchestrator | 2026-03-11 00:56:54.940669 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-11 00:56:54.940672 | orchestrator | Wednesday 11 March 2026 00:49:48 +0000 (0:00:00.269) 0:03:29.494 ******* 2026-03-11 00:56:54.940675 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940688 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.940693 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.940697 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940700 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.940715 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.940719 | orchestrator | 2026-03-11 00:56:54.940722 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-11 00:56:54.940725 | orchestrator | Wednesday 11 March 2026 00:49:49 +0000 (0:00:00.693) 0:03:30.187 ******* 2026-03-11 
00:56:54.940728 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.940731 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.940735 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.940738 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.940744 | orchestrator | 2026-03-11 00:56:54.940747 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-11 00:56:54.940750 | orchestrator | Wednesday 11 March 2026 00:49:50 +0000 (0:00:00.741) 0:03:30.929 ******* 2026-03-11 00:56:54.940753 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.940756 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.940759 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.940762 | orchestrator | 2026-03-11 00:56:54.940765 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-11 00:56:54.940768 | orchestrator | Wednesday 11 March 2026 00:49:50 +0000 (0:00:00.453) 0:03:31.383 ******* 2026-03-11 00:56:54.940771 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.940774 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.940778 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.940781 | orchestrator | 2026-03-11 00:56:54.940784 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-11 00:56:54.940787 | orchestrator | Wednesday 11 March 2026 00:49:52 +0000 (0:00:01.437) 0:03:32.820 ******* 2026-03-11 00:56:54.940790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-11 00:56:54.940793 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-11 00:56:54.940796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-11 00:56:54.940799 | orchestrator | skipping: [testbed-node-0] 2026-03-11 
00:56:54.940802 | orchestrator | 2026-03-11 00:56:54.940806 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-11 00:56:54.940812 | orchestrator | Wednesday 11 March 2026 00:49:52 +0000 (0:00:00.501) 0:03:33.322 ******* 2026-03-11 00:56:54.940817 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.940822 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.940827 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.940832 | orchestrator | 2026-03-11 00:56:54.940837 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-11 00:56:54.940843 | orchestrator | 2026-03-11 00:56:54.940848 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-11 00:56:54.940854 | orchestrator | Wednesday 11 March 2026 00:49:53 +0000 (0:00:00.482) 0:03:33.805 ******* 2026-03-11 00:56:54.940863 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.940868 | orchestrator | 2026-03-11 00:56:54.940874 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-11 00:56:54.940879 | orchestrator | Wednesday 11 March 2026 00:49:53 +0000 (0:00:00.720) 0:03:34.525 ******* 2026-03-11 00:56:54.940885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.940891 | orchestrator | 2026-03-11 00:56:54.940896 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-11 00:56:54.940901 | orchestrator | Wednesday 11 March 2026 00:49:54 +0000 (0:00:00.423) 0:03:34.949 ******* 2026-03-11 00:56:54.940907 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.940912 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.940915 | 
orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.940918 | orchestrator | 2026-03-11 00:56:54.940921 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-11 00:56:54.940924 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:01.019) 0:03:35.968 ******* 2026-03-11 00:56:54.940927 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940930 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.940933 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.940936 | orchestrator | 2026-03-11 00:56:54.940939 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-11 00:56:54.940942 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.280) 0:03:36.248 ******* 2026-03-11 00:56:54.940948 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940952 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.940955 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.940958 | orchestrator | 2026-03-11 00:56:54.940961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-11 00:56:54.940964 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.299) 0:03:36.548 ******* 2026-03-11 00:56:54.940967 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.940970 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.940973 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.940976 | orchestrator | 2026-03-11 00:56:54.940979 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-11 00:56:54.940983 | orchestrator | Wednesday 11 March 2026 00:49:56 +0000 (0:00:00.300) 0:03:36.848 ******* 2026-03-11 00:56:54.940988 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.940993 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.940998 | orchestrator | ok: 
[testbed-node-2] 2026-03-11 00:56:54.941003 | orchestrator | 2026-03-11 00:56:54.941009 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-11 00:56:54.941014 | orchestrator | Wednesday 11 March 2026 00:49:56 +0000 (0:00:00.775) 0:03:37.624 ******* 2026-03-11 00:56:54.941019 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.941024 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.941029 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.941035 | orchestrator | 2026-03-11 00:56:54.941039 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-11 00:56:54.941042 | orchestrator | Wednesday 11 March 2026 00:49:57 +0000 (0:00:00.233) 0:03:37.858 ******* 2026-03-11 00:56:54.941059 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.941063 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.941066 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.941070 | orchestrator | 2026-03-11 00:56:54.941075 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-11 00:56:54.941080 | orchestrator | Wednesday 11 March 2026 00:49:57 +0000 (0:00:00.268) 0:03:38.126 ******* 2026-03-11 00:56:54.941085 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.941090 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.941095 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.941100 | orchestrator | 2026-03-11 00:56:54.941106 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-11 00:56:54.941111 | orchestrator | Wednesday 11 March 2026 00:49:58 +0000 (0:00:00.626) 0:03:38.753 ******* 2026-03-11 00:56:54.941116 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.941121 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.941126 | orchestrator | ok: [testbed-node-2] 2026-03-11 
00:56:54.941132 | orchestrator | 2026-03-11 00:56:54.941137 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-11 00:56:54.941142 | orchestrator | Wednesday 11 March 2026 00:49:58 +0000 (0:00:00.802) 0:03:39.555 ******* 2026-03-11 00:56:54.941148 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.941152 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.941155 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.941160 | orchestrator | 2026-03-11 00:56:54.941164 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-11 00:56:54.941173 | orchestrator | Wednesday 11 March 2026 00:49:59 +0000 (0:00:00.266) 0:03:39.821 ******* 2026-03-11 00:56:54.941178 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.941183 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.941188 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.941194 | orchestrator | 2026-03-11 00:56:54.941197 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-11 00:56:54.941200 | orchestrator | Wednesday 11 March 2026 00:49:59 +0000 (0:00:00.294) 0:03:40.116 ******* 2026-03-11 00:56:54.941203 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.941211 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.941214 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.941218 | orchestrator | 2026-03-11 00:56:54.941221 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-11 00:56:54.941224 | orchestrator | Wednesday 11 March 2026 00:49:59 +0000 (0:00:00.269) 0:03:40.385 ******* 2026-03-11 00:56:54.941227 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.941230 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.941233 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.941236 | 
orchestrator |
2026-03-11 00:56:54.941239 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:54.941242 | orchestrator | Wednesday 11 March 2026 00:49:59 +0000 (0:00:00.258) 0:03:40.644 *******
2026-03-11 00:56:54.941245 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.941251 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.941254 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.941257 | orchestrator |
2026-03-11 00:56:54.941260 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:54.941263 | orchestrator | Wednesday 11 March 2026 00:50:00 +0000 (0:00:00.445) 0:03:41.090 *******
2026-03-11 00:56:54.941266 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.941269 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.941272 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.941275 | orchestrator |
2026-03-11 00:56:54.941278 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:54.941281 | orchestrator | Wednesday 11 March 2026 00:50:00 +0000 (0:00:00.267) 0:03:41.357 *******
2026-03-11 00:56:54.941284 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.941287 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.941290 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.941293 | orchestrator |
2026-03-11 00:56:54.941297 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:54.941300 | orchestrator | Wednesday 11 March 2026 00:50:00 +0000 (0:00:00.244) 0:03:41.602 *******
2026-03-11 00:56:54.941303 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941306 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941309 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941312 | orchestrator |
2026-03-11 00:56:54.941315 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:54.941318 | orchestrator | Wednesday 11 March 2026 00:50:01 +0000 (0:00:00.253) 0:03:41.855 *******
2026-03-11 00:56:54.941321 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941324 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941327 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941330 | orchestrator |
2026-03-11 00:56:54.941333 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:54.941336 | orchestrator | Wednesday 11 March 2026 00:50:01 +0000 (0:00:00.505) 0:03:42.361 *******
2026-03-11 00:56:54.941339 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941342 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941347 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941352 | orchestrator |
2026-03-11 00:56:54.941359 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-11 00:56:54.941366 | orchestrator | Wednesday 11 March 2026 00:50:02 +0000 (0:00:00.437) 0:03:42.798 *******
2026-03-11 00:56:54.941370 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941375 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941380 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941386 | orchestrator |
2026-03-11 00:56:54.941390 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-11 00:56:54.941393 | orchestrator | Wednesday 11 March 2026 00:50:02 +0000 (0:00:00.319) 0:03:43.118 *******
2026-03-11 00:56:54.941397 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.941400 | orchestrator |
2026-03-11 00:56:54.941407 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-11 00:56:54.941410 | orchestrator | Wednesday 11 March 2026 00:50:03 +0000 (0:00:00.140) 0:03:43.749 *******
2026-03-11 00:56:54.941413 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.941416 | orchestrator |
2026-03-11 00:56:54.941434 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-11 00:56:54.941438 | orchestrator | Wednesday 11 March 2026 00:50:03 +0000 (0:00:00.140) 0:03:43.890 *******
2026-03-11 00:56:54.941441 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-11 00:56:54.941444 | orchestrator |
2026-03-11 00:56:54.941447 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-11 00:56:54.941451 | orchestrator | Wednesday 11 March 2026 00:50:04 +0000 (0:00:00.989) 0:03:44.879 *******
2026-03-11 00:56:54.941455 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941460 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941465 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941470 | orchestrator |
2026-03-11 00:56:54.941479 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-11 00:56:54.941484 | orchestrator | Wednesday 11 March 2026 00:50:04 +0000 (0:00:00.320) 0:03:45.200 *******
2026-03-11 00:56:54.941489 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941494 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941499 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941505 | orchestrator |
2026-03-11 00:56:54.941510 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-11 00:56:54.941515 | orchestrator | Wednesday 11 March 2026 00:50:04 +0000 (0:00:00.347) 0:03:45.547 *******
2026-03-11 00:56:54.941519 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941523 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941526 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941529 | orchestrator |
2026-03-11 00:56:54.941532 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-11 00:56:54.941538 | orchestrator | Wednesday 11 March 2026 00:50:06 +0000 (0:00:01.367) 0:03:46.915 *******
2026-03-11 00:56:54.941543 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941548 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941552 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941557 | orchestrator |
2026-03-11 00:56:54.941562 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-11 00:56:54.941567 | orchestrator | Wednesday 11 March 2026 00:50:06 +0000 (0:00:00.740) 0:03:47.656 *******
2026-03-11 00:56:54.941572 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941578 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941583 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941586 | orchestrator |
2026-03-11 00:56:54.941589 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-11 00:56:54.941593 | orchestrator | Wednesday 11 March 2026 00:50:07 +0000 (0:00:00.553) 0:03:48.209 *******
2026-03-11 00:56:54.941596 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941599 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941602 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941605 | orchestrator |
2026-03-11 00:56:54.941608 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-11 00:56:54.941614 | orchestrator | Wednesday 11 March 2026 00:50:08 +0000 (0:00:00.745) 0:03:48.955 *******
2026-03-11 00:56:54.941617 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941620 | orchestrator |
2026-03-11 00:56:54.941624 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-11 00:56:54.941627 | orchestrator | Wednesday 11 March 2026 00:50:09 +0000 (0:00:01.331) 0:03:50.286 *******
2026-03-11 00:56:54.941630 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941633 | orchestrator |
2026-03-11 00:56:54.941636 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-11 00:56:54.941639 | orchestrator | Wednesday 11 March 2026 00:50:10 +0000 (0:00:01.070) 0:03:51.357 *******
2026-03-11 00:56:54.941645 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 00:56:54.941649 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:54.941652 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:54.941655 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:56:54.941658 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:56:54.941661 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-11 00:56:54.941664 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-11 00:56:54.941668 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-03-11 00:56:54.941671 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:56:54.941674 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:56:54.941677 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-11 00:56:54.941704 | orchestrator | changed: [testbed-node-1 -> {{ item }}]
2026-03-11 00:56:54.941711 | orchestrator |
2026-03-11 00:56:54.941720 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-11 00:56:54.941725 | orchestrator | Wednesday 11 March 2026 00:50:13 +0000 (0:00:02.919) 0:03:54.277 *******
2026-03-11 00:56:54.941729 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941734 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941739 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941744 | orchestrator |
2026-03-11 00:56:54.941749 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-11 00:56:54.941753 | orchestrator | Wednesday 11 March 2026 00:50:14 +0000 (0:00:00.948) 0:03:55.225 *******
2026-03-11 00:56:54.941758 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941762 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941767 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941771 | orchestrator |
2026-03-11 00:56:54.941776 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-11 00:56:54.941780 | orchestrator | Wednesday 11 March 2026 00:50:14 +0000 (0:00:00.313) 0:03:55.539 *******
2026-03-11 00:56:54.941785 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.941790 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.941794 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.941800 | orchestrator |
2026-03-11 00:56:54.941805 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-11 00:56:54.941810 | orchestrator | Wednesday 11 March 2026 00:50:15 +0000 (0:00:00.310) 0:03:55.849 *******
2026-03-11 00:56:54.941834 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941838 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941841 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941844 | orchestrator |
2026-03-11 00:56:54.941847 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-11 00:56:54.941850 | orchestrator | Wednesday 11 March 2026 00:50:16 +0000 (0:00:01.746) 0:03:57.596 *******
2026-03-11 00:56:54.941854 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941857 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941860 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941863 | orchestrator |
2026-03-11 00:56:54.941866 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-11 00:56:54.941869 | orchestrator | Wednesday 11 March 2026 00:50:18 +0000 (0:00:01.102) 0:03:58.698 *******
2026-03-11 00:56:54.941872 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.941875 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.941879 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.941882 | orchestrator |
2026-03-11 00:56:54.941885 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-11 00:56:54.941888 | orchestrator | Wednesday 11 March 2026 00:50:18 +0000 (0:00:00.310) 0:03:59.009 *******
2026-03-11 00:56:54.941895 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.941898 | orchestrator |
2026-03-11 00:56:54.941901 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-11 00:56:54.941904 | orchestrator | Wednesday 11 March 2026 00:50:19 +0000 (0:00:00.775) 0:03:59.785 *******
2026-03-11 00:56:54.941907 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.941910 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.941913 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.941916 | orchestrator |
2026-03-11 00:56:54.941919 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-11 00:56:54.941922 | orchestrator | Wednesday 11 March 2026 00:50:19 +0000 (0:00:00.304) 0:04:00.089 *******
2026-03-11 00:56:54.941925 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.941929 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.941932 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.941935 | orchestrator |
2026-03-11 00:56:54.941938 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-11 00:56:54.941941 | orchestrator | Wednesday 11 March 2026 00:50:19 +0000 (0:00:00.348) 0:04:00.437 *******
2026-03-11 00:56:54.941944 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.941947 | orchestrator |
2026-03-11 00:56:54.941950 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-11 00:56:54.941953 | orchestrator | Wednesday 11 March 2026 00:50:20 +0000 (0:00:00.952) 0:04:01.389 *******
2026-03-11 00:56:54.941962 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941965 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941968 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941972 | orchestrator |
2026-03-11 00:56:54.941975 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-11 00:56:54.941978 | orchestrator | Wednesday 11 March 2026 00:50:23 +0000 (0:00:02.495) 0:04:03.885 *******
2026-03-11 00:56:54.941981 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.941984 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.941987 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.941990 | orchestrator |
2026-03-11 00:56:54.941993 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-11 00:56:54.941996 | orchestrator | Wednesday 11 March 2026 00:50:24 +0000 (0:00:01.625) 0:04:05.510 *******
2026-03-11 00:56:54.941999 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.942002 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.942005 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.942008 | orchestrator |
2026-03-11 00:56:54.942029 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-11 00:56:54.942034 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:02.494) 0:04:08.005 *******
2026-03-11 00:56:54.942037 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.942040 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.942043 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.942046 | orchestrator |
2026-03-11 00:56:54.942049 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-11 00:56:54.942052 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:02.084) 0:04:10.089 *******
2026-03-11 00:56:54.942055 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.942058 | orchestrator |
2026-03-11 00:56:54.942061 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-11 00:56:54.942064 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.451) 0:04:10.540 *******
2026-03-11 00:56:54.942068 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942071 | orchestrator |
2026-03-11 00:56:54.942074 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-11 00:56:54.942079 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:01.032) 0:04:11.572 *******
2026-03-11 00:56:54.942082 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942086 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942089 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942092 | orchestrator |
2026-03-11 00:56:54.942095 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-11 00:56:54.942098 | orchestrator | Wednesday 11 March 2026 00:50:38 +0000 (0:00:08.043) 0:04:19.616 *******
2026-03-11 00:56:54.942101 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942104 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942107 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942110 | orchestrator |
2026-03-11 00:56:54.942113 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-11 00:56:54.942116 | orchestrator | Wednesday 11 March 2026 00:50:39 +0000 (0:00:00.531) 0:04:20.147 *******
2026-03-11 00:56:54.942132 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d0126ada8e2c9bb9606e51f86a70fa5bf598dd5b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-11 00:56:54.942137 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d0126ada8e2c9bb9606e51f86a70fa5bf598dd5b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-11 00:56:54.942142 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d0126ada8e2c9bb9606e51f86a70fa5bf598dd5b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-11 00:56:54.942146 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d0126ada8e2c9bb9606e51f86a70fa5bf598dd5b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-11 00:56:54.942150 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d0126ada8e2c9bb9606e51f86a70fa5bf598dd5b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-11 00:56:54.942158 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d0126ada8e2c9bb9606e51f86a70fa5bf598dd5b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d0126ada8e2c9bb9606e51f86a70fa5bf598dd5b'}])
2026-03-11 00:56:54.942168 | orchestrator |
2026-03-11 00:56:54.942173 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-11 00:56:54.942178 | orchestrator | Wednesday 11 March 2026 00:50:52 +0000 (0:00:13.175) 0:04:33.323 *******
2026-03-11 00:56:54.942184 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942189 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942194 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942199 | orchestrator |
2026-03-11 00:56:54.942204 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-11 00:56:54.942214 | orchestrator | Wednesday 11 March 2026 00:50:53 +0000 (0:00:00.463) 0:04:33.786 *******
2026-03-11 00:56:54.942220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.942225 | orchestrator |
2026-03-11 00:56:54.942228 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-11 00:56:54.942231 | orchestrator | Wednesday 11 March 2026 00:50:53 +0000 (0:00:00.861) 0:04:34.647 *******
2026-03-11 00:56:54.942234 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942237 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942240 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942244 | orchestrator |
2026-03-11 00:56:54.942247 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-11 00:56:54.942250 | orchestrator | Wednesday 11 March 2026 00:50:54 +0000 (0:00:00.300) 0:04:34.947 *******
2026-03-11 00:56:54.942253 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942256 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942259 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942262 | orchestrator |
2026-03-11 00:56:54.942265 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-11 00:56:54.942268 | orchestrator | Wednesday 11 March 2026 00:50:54 +0000 (0:00:00.708) 0:04:35.252 *******
2026-03-11 00:56:54.942272 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:56:54.942275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-11 00:56:54.942278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-11 00:56:54.942281 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942284 | orchestrator |
2026-03-11 00:56:54.942287 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-11 00:56:54.942291 | orchestrator | Wednesday 11 March 2026 00:50:55 +0000 (0:00:00.708) 0:04:35.961 *******
2026-03-11 00:56:54.942294 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942297 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942300 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942303 | orchestrator |
2026-03-11 00:56:54.942320 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-11 00:56:54.942324 | orchestrator |
2026-03-11 00:56:54.942327 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-11 00:56:54.942330 | orchestrator | Wednesday 11 March 2026 00:50:55 +0000 (0:00:00.622) 0:04:36.583 *******
2026-03-11 00:56:54.942333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.942337 | orchestrator |
2026-03-11 00:56:54.942340 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-11 00:56:54.942343 | orchestrator | Wednesday 11 March 2026 00:50:56 +0000 (0:00:00.385) 0:04:36.968 *******
2026-03-11 00:56:54.942346 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.942349 | orchestrator |
2026-03-11 00:56:54.942352 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-11 00:56:54.942355 | orchestrator | Wednesday 11 March 2026 00:50:56 +0000 (0:00:00.563) 0:04:37.531 *******
2026-03-11 00:56:54.942358 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942361 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942364 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942367 | orchestrator |
2026-03-11 00:56:54.942370 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-11 00:56:54.942373 | orchestrator | Wednesday 11 March 2026 00:50:57 +0000 (0:00:00.681) 0:04:38.213 *******
2026-03-11 00:56:54.942376 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942379 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942383 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942386 | orchestrator |
2026-03-11 00:56:54.942389 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-11 00:56:54.942394 | orchestrator | Wednesday 11 March 2026 00:50:57 +0000 (0:00:00.285) 0:04:38.499 *******
2026-03-11 00:56:54.942397 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942401 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942404 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942407 | orchestrator |
2026-03-11 00:56:54.942410 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-11 00:56:54.942413 | orchestrator | Wednesday 11 March 2026 00:50:58 +0000 (0:00:00.491) 0:04:38.990 *******
2026-03-11 00:56:54.942416 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942419 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942422 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942425 | orchestrator |
2026-03-11 00:56:54.942428 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-11 00:56:54.942431 | orchestrator | Wednesday 11 March 2026 00:50:58 +0000 (0:00:00.303) 0:04:39.294 *******
2026-03-11 00:56:54.942434 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942437 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942442 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942446 | orchestrator |
2026-03-11 00:56:54.942449 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-11 00:56:54.942452 | orchestrator | Wednesday 11 March 2026 00:50:59 +0000 (0:00:00.610) 0:04:39.905 *******
2026-03-11 00:56:54.942455 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942458 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942461 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942464 | orchestrator |
2026-03-11 00:56:54.942467 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-11 00:56:54.942470 | orchestrator | Wednesday 11 March 2026 00:50:59 +0000 (0:00:00.272) 0:04:40.178 *******
2026-03-11 00:56:54.942473 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942476 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942479 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942482 | orchestrator |
2026-03-11 00:56:54.942485 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-11 00:56:54.942488 | orchestrator | Wednesday 11 March 2026 00:50:59 +0000 (0:00:00.430) 0:04:40.608 *******
2026-03-11 00:56:54.942492 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942495 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942498 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942501 | orchestrator |
2026-03-11 00:56:54.942504 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-11 00:56:54.942507 | orchestrator | Wednesday 11 March 2026 00:51:00 +0000 (0:00:01.011) 0:04:41.620 *******
2026-03-11 00:56:54.942510 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942513 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942516 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942519 | orchestrator |
2026-03-11 00:56:54.942522 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-11 00:56:54.942525 | orchestrator | Wednesday 11 March 2026 00:51:01 +0000 (0:00:00.698) 0:04:42.318 *******
2026-03-11 00:56:54.942528 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942531 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942537 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942544 | orchestrator |
2026-03-11 00:56:54.942550 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-11 00:56:54.942555 | orchestrator | Wednesday 11 March 2026 00:51:01 +0000 (0:00:00.288) 0:04:42.607 *******
2026-03-11 00:56:54.942560 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942565 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942570 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942575 | orchestrator |
2026-03-11 00:56:54.942581 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-11 00:56:54.942586 | orchestrator | Wednesday 11 March 2026 00:51:02 +0000 (0:00:00.286) 0:04:42.893 *******
2026-03-11 00:56:54.942596 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942602 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942607 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942610 | orchestrator |
2026-03-11 00:56:54.942613 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-11 00:56:54.942616 | orchestrator | Wednesday 11 March 2026 00:51:02 +0000 (0:00:00.429) 0:04:43.323 *******
2026-03-11 00:56:54.942620 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942623 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942640 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942647 | orchestrator |
2026-03-11 00:56:54.942653 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:54.942658 | orchestrator | Wednesday 11 March 2026 00:51:02 +0000 (0:00:00.264) 0:04:43.587 *******
2026-03-11 00:56:54.942663 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942668 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942672 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942677 | orchestrator |
2026-03-11 00:56:54.942692 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:54.942697 | orchestrator | Wednesday 11 March 2026 00:51:03 +0000 (0:00:00.273) 0:04:43.860 *******
2026-03-11 00:56:54.942702 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942707 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942712 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942718 | orchestrator |
2026-03-11 00:56:54.942723 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:54.942728 | orchestrator | Wednesday 11 March 2026 00:51:03 +0000 (0:00:00.259) 0:04:44.120 *******
2026-03-11 00:56:54.942733 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942738 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942741 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942744 | orchestrator |
2026-03-11 00:56:54.942747 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:54.942750 | orchestrator | Wednesday 11 March 2026 00:51:03 +0000 (0:00:00.408) 0:04:44.529 *******
2026-03-11 00:56:54.942753 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942757 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942760 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942763 | orchestrator |
2026-03-11 00:56:54.942766 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:54.942769 | orchestrator | Wednesday 11 March 2026 00:51:04 +0000 (0:00:00.325) 0:04:44.855 *******
2026-03-11 00:56:54.942772 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942776 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942779 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942782 | orchestrator |
2026-03-11 00:56:54.942785 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:54.942788 | orchestrator | Wednesday 11 March 2026 00:51:04 +0000 (0:00:00.360) 0:04:45.215 *******
2026-03-11 00:56:54.942791 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.942795 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.942798 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.942801 | orchestrator |
2026-03-11 00:56:54.942804 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-11 00:56:54.942807 | orchestrator | Wednesday 11 March 2026 00:51:05 +0000 (0:00:00.615) 0:04:45.831 *******
2026-03-11 00:56:54.942810 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:56:54.942813 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:54.942817 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:54.942820 | orchestrator |
2026-03-11 00:56:54.942825 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-11 00:56:54.942835 | orchestrator | Wednesday 11 March 2026 00:51:05 +0000 (0:00:00.657) 0:04:46.488 *******
2026-03-11 00:56:54.942840 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.942845 | orchestrator |
2026-03-11 00:56:54.942850 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-11 00:56:54.942916 | orchestrator | Wednesday 11 March 2026 00:51:06 +0000 (0:00:00.482) 0:04:46.971 *******
2026-03-11 00:56:54.942937 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.942942 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.942948 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.942953 | orchestrator |
2026-03-11 00:56:54.942958 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-11 00:56:54.942963 | orchestrator | Wednesday 11 March 2026 00:51:07 +0000 (0:00:00.694) 0:04:47.665 *******
2026-03-11 00:56:54.942967 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.942971 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.942976 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.942981 | orchestrator |
2026-03-11 00:56:54.942986 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-11 00:56:54.942990 | orchestrator | Wednesday 11 March 2026 00:51:07 +0000 (0:00:00.586) 0:04:48.252 *******
2026-03-11 00:56:54.942996 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 00:56:54.943001 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 00:56:54.943007 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 00:56:54.943012 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-11 00:56:54.943017 | orchestrator |
2026-03-11 00:56:54.943023 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-11 00:56:54.943028 | orchestrator | Wednesday 11 March 2026 00:51:16 +0000 (0:00:09.395) 0:04:57.647 *******
2026-03-11 00:56:54.943033 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.943038 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.943043 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.943049 | orchestrator |
2026-03-11 00:56:54.943052 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-11 00:56:54.943055 | orchestrator | Wednesday 11 March 2026 00:51:17 +0000 (0:00:00.350) 0:04:57.998 *******
2026-03-11 00:56:54.943058 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-11 00:56:54.943061 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-11 00:56:54.943065 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-11 00:56:54.943068 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-11 00:56:54.943071 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:54.943074 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:54.943077 | orchestrator |
2026-03-11 00:56:54.943101 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-11 00:56:54.943104 | orchestrator | Wednesday 11 March 2026 00:51:19 +0000 (0:00:02.391) 0:05:00.390 *******
2026-03-11 00:56:54.943107 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-11 00:56:54.943111 |
orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-11 00:56:54.943114 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-11 00:56:54.943117 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-11 00:56:54.943120 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-11 00:56:54.943123 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-11 00:56:54.943126 | orchestrator | 2026-03-11 00:56:54.943129 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-11 00:56:54.943132 | orchestrator | Wednesday 11 March 2026 00:51:21 +0000 (0:00:01.485) 0:05:01.875 ******* 2026-03-11 00:56:54.943135 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:54.943138 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:54.943142 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:54.943149 | orchestrator | 2026-03-11 00:56:54.943153 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-11 00:56:54.943159 | orchestrator | Wednesday 11 March 2026 00:51:22 +0000 (0:00:01.149) 0:05:03.025 ******* 2026-03-11 00:56:54.943167 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.943172 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.943177 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.943182 | orchestrator | 2026-03-11 00:56:54.943187 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-11 00:56:54.943191 | orchestrator | Wednesday 11 March 2026 00:51:22 +0000 (0:00:00.305) 0:05:03.330 ******* 2026-03-11 00:56:54.943197 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.943202 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.943207 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.943213 | orchestrator | 2026-03-11 00:56:54.943216 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2026-03-11 00:56:54.943220 | orchestrator | Wednesday 11 March 2026 00:51:22 +0000 (0:00:00.304) 0:05:03.635 ******* 2026-03-11 00:56:54.943223 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.943226 | orchestrator | 2026-03-11 00:56:54.943229 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-11 00:56:54.943232 | orchestrator | Wednesday 11 March 2026 00:51:23 +0000 (0:00:00.760) 0:05:04.395 ******* 2026-03-11 00:56:54.943235 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.943239 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.943242 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.943245 | orchestrator | 2026-03-11 00:56:54.943248 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-11 00:56:54.943253 | orchestrator | Wednesday 11 March 2026 00:51:24 +0000 (0:00:00.339) 0:05:04.735 ******* 2026-03-11 00:56:54.943257 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.943260 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.943263 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:54.943266 | orchestrator | 2026-03-11 00:56:54.943269 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-11 00:56:54.943272 | orchestrator | Wednesday 11 March 2026 00:51:24 +0000 (0:00:00.346) 0:05:05.082 ******* 2026-03-11 00:56:54.943276 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:54.943279 | orchestrator | 2026-03-11 00:56:54.943282 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-11 00:56:54.943285 | orchestrator | Wednesday 11 March 2026 00:51:25 
+0000 (0:00:00.810) 0:05:05.892 ******* 2026-03-11 00:56:54.943288 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.943291 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.943294 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.943298 | orchestrator | 2026-03-11 00:56:54.943301 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-11 00:56:54.943304 | orchestrator | Wednesday 11 March 2026 00:51:26 +0000 (0:00:01.197) 0:05:07.090 ******* 2026-03-11 00:56:54.943307 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.943310 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.943313 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.943316 | orchestrator | 2026-03-11 00:56:54.943319 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-11 00:56:54.943322 | orchestrator | Wednesday 11 March 2026 00:51:27 +0000 (0:00:01.086) 0:05:08.177 ******* 2026-03-11 00:56:54.943325 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.943329 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.943332 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.943335 | orchestrator | 2026-03-11 00:56:54.943338 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-11 00:56:54.943346 | orchestrator | Wednesday 11 March 2026 00:51:29 +0000 (0:00:01.830) 0:05:10.008 ******* 2026-03-11 00:56:54.943349 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:54.943352 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:54.943355 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:54.943358 | orchestrator | 2026-03-11 00:56:54.943361 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-11 00:56:54.943364 | orchestrator | Wednesday 11 March 2026 00:51:31 +0000 
(0:00:01.927) 0:05:11.935 ******* 2026-03-11 00:56:54.943367 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:54.943370 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:54.943373 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-11 00:56:54.943377 | orchestrator | 2026-03-11 00:56:54.943380 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-11 00:56:54.943383 | orchestrator | Wednesday 11 March 2026 00:51:31 +0000 (0:00:00.700) 0:05:12.635 ******* 2026-03-11 00:56:54.943386 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-11 00:56:54.943403 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-11 00:56:54.943407 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-11 00:56:54.943410 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-11 00:56:54.943413 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-03-11 00:56:54.943416 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-03-11 00:56:54.943419 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.943422 | orchestrator |
2026-03-11 00:56:54.943426 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-11 00:56:54.943429 | orchestrator | Wednesday 11 March 2026 00:52:07 +0000 (0:00:35.894) 0:05:48.530 *******
2026-03-11 00:56:54.943432 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.943435 | orchestrator |
2026-03-11 00:56:54.943438 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-11 00:56:54.943441 | orchestrator | Wednesday 11 March 2026 00:52:09 +0000 (0:00:01.356) 0:05:49.886 *******
2026-03-11 00:56:54.943444 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.943447 | orchestrator |
2026-03-11 00:56:54.943450 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-11 00:56:54.943453 | orchestrator | Wednesday 11 March 2026 00:52:09 +0000 (0:00:00.354) 0:05:50.241 *******
2026-03-11 00:56:54.943456 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.943459 | orchestrator |
2026-03-11 00:56:54.943462 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-11 00:56:54.943465 | orchestrator | Wednesday 11 March 2026 00:52:09 +0000 (0:00:00.151) 0:05:50.392 *******
2026-03-11 00:56:54.943468 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-11 00:56:54.943472 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-11 00:56:54.943475 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-11 00:56:54.943478 | orchestrator |
2026-03-11 00:56:54.943481 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-11 00:56:54.943484 | orchestrator | Wednesday 11 March 2026 00:52:16 +0000 (0:00:06.494) 0:05:56.887 *******
2026-03-11 00:56:54.943487 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-11 00:56:54.943490 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-11 00:56:54.943495 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-11 00:56:54.943501 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-11 00:56:54.943504 | orchestrator |
2026-03-11 00:56:54.943507 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-11 00:56:54.943510 | orchestrator | Wednesday 11 March 2026 00:52:21 +0000 (0:00:05.681) 0:06:02.568 *******
2026-03-11 00:56:54.943513 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.943516 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.943519 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.943522 | orchestrator |
2026-03-11 00:56:54.943525 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-11 00:56:54.943528 | orchestrator | Wednesday 11 March 2026 00:52:22 +0000 (0:00:00.771) 0:06:03.339 *******
2026-03-11 00:56:54.943531 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.943534 | orchestrator |
2026-03-11 00:56:54.943537 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-11 00:56:54.943541 | orchestrator | Wednesday 11 March 2026 00:52:23 +0000 (0:00:00.565) 0:06:03.905 *******
2026-03-11 00:56:54.943544 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.943547 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.943550 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.943553 | orchestrator |
2026-03-11 00:56:54.943556 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-11 00:56:54.943559 | orchestrator | Wednesday 11 March 2026 00:52:23 +0000 (0:00:00.567) 0:06:04.473 *******
2026-03-11 00:56:54.943562 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.943565 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.943568 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.943571 | orchestrator |
2026-03-11 00:56:54.943574 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-11 00:56:54.943577 | orchestrator | Wednesday 11 March 2026 00:52:24 +0000 (0:00:01.162) 0:06:05.635 *******
2026-03-11 00:56:54.943580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:56:54.943583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-11 00:56:54.943586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-11 00:56:54.943589 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.943592 | orchestrator |
2026-03-11 00:56:54.943596 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-11 00:56:54.943599 | orchestrator | Wednesday 11 March 2026 00:52:25 +0000 (0:00:00.695) 0:06:06.331 *******
2026-03-11 00:56:54.943602 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.943605 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.943608 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.943611 | orchestrator |
2026-03-11 00:56:54.943614 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-11 00:56:54.943617 | orchestrator |
2026-03-11 00:56:54.943620 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-11 00:56:54.943623 | orchestrator | Wednesday 11 March 2026 00:52:26 +0000 (0:00:00.810) 0:06:07.141 *******
2026-03-11 00:56:54.943635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.943639 | orchestrator |
2026-03-11 00:56:54.943642 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-11 00:56:54.943645 | orchestrator | Wednesday 11 March 2026 00:52:26 +0000 (0:00:00.498) 0:06:07.639 *******
2026-03-11 00:56:54.943648 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.943651 | orchestrator |
2026-03-11 00:56:54.943654 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-11 00:56:54.943658 | orchestrator | Wednesday 11 March 2026 00:52:27 +0000 (0:00:00.700) 0:06:08.339 *******
2026-03-11 00:56:54.943663 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.943667 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.943670 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.943673 | orchestrator |
2026-03-11 00:56:54.943676 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-11 00:56:54.943689 | orchestrator | Wednesday 11 March 2026 00:52:27 +0000 (0:00:00.302) 0:06:08.642 *******
2026-03-11 00:56:54.943693 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943696 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943699 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943702 | orchestrator |
2026-03-11 00:56:54.943705 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-11 00:56:54.943708 | orchestrator | Wednesday 11 March 2026 00:52:28 +0000 (0:00:00.687) 0:06:09.329 *******
2026-03-11 00:56:54.943711 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943715 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943718 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943721 | orchestrator |
2026-03-11 00:56:54.943724 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-11 00:56:54.943727 | orchestrator | Wednesday 11 March 2026 00:52:29 +0000 (0:00:00.711) 0:06:10.040 *******
2026-03-11 00:56:54.943730 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943734 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943738 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943743 | orchestrator |
2026-03-11 00:56:54.943748 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-11 00:56:54.943753 | orchestrator | Wednesday 11 March 2026 00:52:30 +0000 (0:00:00.992) 0:06:11.033 *******
2026-03-11 00:56:54.943758 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.943762 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.943765 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.943768 | orchestrator |
2026-03-11 00:56:54.943771 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-11 00:56:54.943775 | orchestrator | Wednesday 11 March 2026 00:52:30 +0000 (0:00:00.328) 0:06:11.362 *******
2026-03-11 00:56:54.943778 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.943784 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.943787 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.943790 | orchestrator |
2026-03-11 00:56:54.943793 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-11 00:56:54.943796 | orchestrator | Wednesday 11 March 2026 00:52:31 +0000 (0:00:00.305) 0:06:11.668 *******
2026-03-11 00:56:54.943799 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.943802 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.943806 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.943809 | orchestrator |
2026-03-11 00:56:54.943812 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-11 00:56:54.943815 | orchestrator | Wednesday 11 March 2026 00:52:31 +0000 (0:00:00.302) 0:06:11.971 *******
2026-03-11 00:56:54.943818 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943821 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943824 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943827 | orchestrator |
2026-03-11 00:56:54.943830 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-11 00:56:54.943833 | orchestrator | Wednesday 11 March 2026 00:52:32 +0000 (0:00:00.732) 0:06:12.703 *******
2026-03-11 00:56:54.943837 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943840 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943843 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943846 | orchestrator |
2026-03-11 00:56:54.943849 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-11 00:56:54.943852 | orchestrator | Wednesday 11 March 2026 00:52:33 +0000 (0:00:01.136) 0:06:13.839 *******
2026-03-11 00:56:54.943855 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.943858 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.943864 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.943867 | orchestrator |
2026-03-11 00:56:54.943870 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-11 00:56:54.943873 | orchestrator | Wednesday 11 March 2026 00:52:33 +0000 (0:00:00.316) 0:06:14.156 *******
2026-03-11 00:56:54.943876 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.943879 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.943882 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.943886 | orchestrator |
2026-03-11 00:56:54.943891 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-11 00:56:54.943897 | orchestrator | Wednesday 11 March 2026 00:52:33 +0000 (0:00:00.294) 0:06:14.451 *******
2026-03-11 00:56:54.943902 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943906 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943911 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943916 | orchestrator |
2026-03-11 00:56:54.943922 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-11 00:56:54.943927 | orchestrator | Wednesday 11 March 2026 00:52:34 +0000 (0:00:00.322) 0:06:14.774 *******
2026-03-11 00:56:54.943931 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943936 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943940 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943944 | orchestrator |
2026-03-11 00:56:54.943948 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:54.943953 | orchestrator | Wednesday 11 March 2026 00:52:34 +0000 (0:00:00.607) 0:06:15.382 *******
2026-03-11 00:56:54.943958 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.943963 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.943982 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.943988 | orchestrator |
2026-03-11 00:56:54.943993 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:54.943997 | orchestrator | Wednesday 11 March 2026 00:52:35 +0000 (0:00:00.323) 0:06:15.705 *******
2026-03-11 00:56:54.944001 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944006 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944010 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944015 | orchestrator |
2026-03-11 00:56:54.944020 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:54.944024 | orchestrator | Wednesday 11 March 2026 00:52:35 +0000 (0:00:00.328) 0:06:16.034 *******
2026-03-11 00:56:54.944029 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944034 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944039 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944044 | orchestrator |
2026-03-11 00:56:54.944049 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:54.944053 | orchestrator | Wednesday 11 March 2026 00:52:35 +0000 (0:00:00.303) 0:06:16.337 *******
2026-03-11 00:56:54.944058 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944062 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944067 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944071 | orchestrator |
2026-03-11 00:56:54.944076 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:54.944080 | orchestrator | Wednesday 11 March 2026 00:52:36 +0000 (0:00:00.565) 0:06:16.903 *******
2026-03-11 00:56:54.944085 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.944090 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.944094 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.944099 | orchestrator |
2026-03-11 00:56:54.944104 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:54.944108 | orchestrator | Wednesday 11 March 2026 00:52:36 +0000 (0:00:00.335) 0:06:17.238 *******
2026-03-11 00:56:54.944113 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.944118 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.944122 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.944127 | orchestrator |
2026-03-11 00:56:54.944136 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-11 00:56:54.944141 | orchestrator | Wednesday 11 March 2026 00:52:37 +0000 (0:00:00.540) 0:06:17.779 *******
2026-03-11 00:56:54.944145 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.944150 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.944155 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.944160 | orchestrator |
2026-03-11 00:56:54.944165 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-11 00:56:54.944170 | orchestrator | Wednesday 11 March 2026 00:52:37 +0000 (0:00:00.775) 0:06:18.554 *******
2026-03-11 00:56:54.944175 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:54.944183 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:54.944187 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:54.944192 | orchestrator |
2026-03-11 00:56:54.944196 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-11 00:56:54.944201 | orchestrator | Wednesday 11 March 2026 00:52:38 +0000 (0:00:00.705) 0:06:19.260 *******
2026-03-11 00:56:54.944205 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.944210 | orchestrator |
2026-03-11 00:56:54.944215 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-11 00:56:54.944219 | orchestrator | Wednesday 11 March 2026 00:52:39 +0000 (0:00:00.562) 0:06:19.822 *******
2026-03-11 00:56:54.944224 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944228 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944233 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944237 | orchestrator |
2026-03-11 00:56:54.944243 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-11 00:56:54.944248 | orchestrator | Wednesday 11 March 2026 00:52:39 +0000 (0:00:00.556) 0:06:20.378 *******
2026-03-11 00:56:54.944254 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944259 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944265 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944269 | orchestrator |
2026-03-11 00:56:54.944274 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-11 00:56:54.944279 | orchestrator | Wednesday 11 March 2026 00:52:40 +0000 (0:00:00.319) 0:06:20.697 *******
2026-03-11 00:56:54.944284 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.944289 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.944293 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.944298 | orchestrator |
2026-03-11 00:56:54.944302 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-11 00:56:54.944308 | orchestrator | Wednesday 11 March 2026 00:52:40 +0000 (0:00:00.613) 0:06:21.311 *******
2026-03-11 00:56:54.944313 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.944318 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.944322 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.944327 | orchestrator |
2026-03-11 00:56:54.944332 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-11 00:56:54.944337 | orchestrator | Wednesday 11 March 2026 00:52:41 +0000 (0:00:00.376) 0:06:21.688 *******
2026-03-11 00:56:54.944341 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-11 00:56:54.944347 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-11 00:56:54.944351 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-11 00:56:54.944356 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-11 00:56:54.944362 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-11 00:56:54.944377 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-11 00:56:54.944382 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-11 00:56:54.944387 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-11 00:56:54.944392 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-11 00:56:54.944397 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-11 00:56:54.944402 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-11 00:56:54.944408 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-11 00:56:54.944412 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-11 00:56:54.944415 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-11 00:56:54.944418 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-11 00:56:54.944421 | orchestrator |
2026-03-11 00:56:54.944424 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-11 00:56:54.944428 | orchestrator | Wednesday 11 March 2026 00:52:44 +0000 (0:00:03.576) 0:06:25.265 *******
2026-03-11 00:56:54.944431 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944434 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944437 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944440 | orchestrator |
2026-03-11 00:56:54.944443 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-11 00:56:54.944446 | orchestrator | Wednesday 11 March 2026 00:52:44 +0000 (0:00:00.379) 0:06:25.644 *******
2026-03-11 00:56:54.944449 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.944452 | orchestrator |
2026-03-11 00:56:54.944455 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-11 00:56:54.944459 | orchestrator | Wednesday 11 March 2026 00:52:45 +0000 (0:00:00.577) 0:06:26.222 *******
2026-03-11 00:56:54.944462 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-11 00:56:54.944465 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-11 00:56:54.944468 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-11 00:56:54.944471 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-11 00:56:54.944477 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-11 00:56:54.944480 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-11 00:56:54.944483 | orchestrator |
2026-03-11 00:56:54.944486 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-11 00:56:54.944489 | orchestrator | Wednesday 11 March 2026 00:52:47 +0000 (0:00:01.491) 0:06:27.714 *******
2026-03-11 00:56:54.944492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:54.944495 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:54.944498 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:56:54.944501 | orchestrator |
2026-03-11 00:56:54.944504 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-11 00:56:54.944507 | orchestrator | Wednesday 11 March 2026 00:52:49 +0000 (0:00:02.095) 0:06:29.809 *******
2026-03-11 00:56:54.944511 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 00:56:54.944514 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:54.944517 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:54.944520 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 00:56:54.944523 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-11 00:56:54.944526 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:54.944532 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 00:56:54.944535 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-11 00:56:54.944538 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:54.944541 | orchestrator |
2026-03-11 00:56:54.944545 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-11 00:56:54.944548 | orchestrator | Wednesday 11 March 2026 00:52:50 +0000 (0:00:01.294) 0:06:31.104 *******
2026-03-11 00:56:54.944551 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.944554 | orchestrator |
2026-03-11 00:56:54.944557 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-11 00:56:54.944560 | orchestrator | Wednesday 11 March 2026 00:52:52 +0000 (0:00:02.136) 0:06:33.240 *******
2026-03-11 00:56:54.944563 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.944566 | orchestrator |
2026-03-11 00:56:54.944570 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-11 00:56:54.944573 | orchestrator | Wednesday 11 March 2026 00:52:53 +0000 (0:00:00.541) 0:06:33.782 *******
2026-03-11 00:56:54.944576 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9a64462a-5614-5a25-979d-2f017565a0c4', 'data_vg': 'ceph-9a64462a-5614-5a25-979d-2f017565a0c4'})
2026-03-11 00:56:54.944580 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1f24027a-cb62-5112-a2b4-0ff1a158a780', 'data_vg': 'ceph-1f24027a-cb62-5112-a2b4-0ff1a158a780'})
2026-03-11 00:56:54.944588 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9', 'data_vg': 'ceph-5d149e3f-abc8-57c5-b2f4-c991fc87e4f9'})
2026-03-11 00:56:54.944591 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-930a51f3-082d-5f24-af57-1314a0ff4b68', 'data_vg': 'ceph-930a51f3-082d-5f24-af57-1314a0ff4b68'})
2026-03-11 00:56:54.944594 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e6773b3-a2d9-5476-8e14-434a68284534', 'data_vg': 'ceph-9e6773b3-a2d9-5476-8e14-434a68284534'})
2026-03-11 00:56:54.944598 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-12aec0f2-63b1-5667-a447-7095f264ece1', 'data_vg': 'ceph-12aec0f2-63b1-5667-a447-7095f264ece1'})
2026-03-11 00:56:54.944601 | orchestrator |
2026-03-11 00:56:54.944604 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-11 00:56:54.944607 | orchestrator | Wednesday 11 March 2026 00:53:40 +0000 (0:00:47.030) 0:07:20.812 *******
2026-03-11 00:56:54.944610 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944613 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944616 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944619 | orchestrator |
2026-03-11 00:56:54.944622 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-11 00:56:54.944626 | orchestrator | Wednesday 11 March 2026 00:53:40 +0000 (0:00:00.311) 0:07:21.124 *******
2026-03-11 00:56:54.944629 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.944632 | orchestrator |
2026-03-11 00:56:54.944635 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-11 00:56:54.944638 | orchestrator | Wednesday 11 March 2026 00:53:41 +0000 (0:00:01.056) 0:07:21.697 *******
2026-03-11 00:56:54.944641 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.944644 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.944647 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.944650 | orchestrator |
2026-03-11 00:56:54.944653 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-11 00:56:54.944656 | orchestrator | Wednesday 11 March 2026 00:53:42 +0000 (0:00:01.056) 0:07:22.753 *******
2026-03-11 00:56:54.944659 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.944663 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.944666 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.944673 | orchestrator |
2026-03-11 00:56:54.944676 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-11 00:56:54.944765 | orchestrator | Wednesday 11 March 2026 00:53:45 +0000 (0:00:03.011) 0:07:25.765 *******
2026-03-11 00:56:54.944773 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.944776 | orchestrator |
2026-03-11 00:56:54.944782 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-11 00:56:54.944786 | orchestrator | Wednesday 11 March 2026 00:53:45 +0000 (0:00:00.647) 0:07:26.412 *******
2026-03-11 00:56:54.944789 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:54.944792 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:54.944795 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:54.944798 | orchestrator |
2026-03-11 00:56:54.944802 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-11 00:56:54.944805 | orchestrator | Wednesday 11 March 2026 00:53:47 +0000 (0:00:01.578) 0:07:27.991 *******
2026-03-11 00:56:54.944808 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:54.944815 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:54.944819 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:54.944822 | orchestrator |
2026-03-11 00:56:54.944825 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-11 00:56:54.944828 | orchestrator | Wednesday 11 March 2026 00:53:48 +0000 (0:00:01.202) 0:07:29.193 *******
2026-03-11 00:56:54.944831 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:54.944834 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:54.944837 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:54.944840 | orchestrator |
2026-03-11 00:56:54.944843 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-11 00:56:54.944847 | orchestrator | Wednesday 11 March 2026 00:53:50 +0000 (0:00:01.992) 0:07:31.185 *******
2026-03-11 00:56:54.944850 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944853 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944856 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944859 | orchestrator |
2026-03-11 00:56:54.944862 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-11 00:56:54.944865 | orchestrator | Wednesday 11 March 2026 00:53:50 +0000 (0:00:00.288) 0:07:31.474 *******
2026-03-11 00:56:54.944868 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944871 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944874 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.944877 | orchestrator |
2026-03-11 00:56:54.944880 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-11 00:56:54.944883 | orchestrator | Wednesday 11 March 2026 00:53:51 +0000 (0:00:00.473) 0:07:31.948 *******
2026-03-11 00:56:54.944887 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-03-11 00:56:54.944890 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-11 00:56:54.944893 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-03-11 00:56:54.944896 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-11 00:56:54.944899 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-03-11 00:56:54.944902 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-11 00:56:54.944905 | orchestrator |
2026-03-11 00:56:54.944908 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-11 00:56:54.944911 | orchestrator | Wednesday 11 March 2026 00:53:52 +0000 (0:00:01.199) 0:07:33.147 *******
2026-03-11 00:56:54.944914 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-03-11 00:56:54.944917 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-11 00:56:54.944920 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-11 00:56:54.944923 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-11 00:56:54.944931 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-11 00:56:54.944934 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-11 00:56:54.944937 | orchestrator |
2026-03-11 00:56:54.944941 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-11 00:56:54.944948 | orchestrator | Wednesday 11 March 2026 00:53:54 +0000 (0:00:02.466) 0:07:35.614 *******
2026-03-11 00:56:54.944951 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-03-11 00:56:54.944954 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-11 00:56:54.944957 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-11 00:56:54.944960 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-11 00:56:54.944963 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-11 00:56:54.944966 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-11 00:56:54.944969 | orchestrator |
2026-03-11 00:56:54.944972 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-11 00:56:54.944975 | orchestrator | Wednesday 11 March 2026 00:53:58 +0000 (0:00:03.680) 0:07:39.295 *******
2026-03-11 00:56:54.944978 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.944981 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.944985 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.944988 | orchestrator |
2026-03-11 00:56:54.944991 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-11 00:56:54.944994 | orchestrator | Wednesday 11 March 2026 00:54:01 +0000 (0:00:03.236) 0:07:42.531 *******
2026-03-11 00:56:54.944997 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945000 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945003 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-11 00:56:54.945006 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.945009 | orchestrator |
2026-03-11 00:56:54.945012 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-11 00:56:54.945015 | orchestrator | Wednesday 11 March 2026 00:54:13 +0000 (0:00:12.121) 0:07:54.653 *******
2026-03-11 00:56:54.945018 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945022 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945025 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945028 | orchestrator |
2026-03-11 00:56:54.945031 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-11 00:56:54.945034 | orchestrator | Wednesday 11 March 2026 00:54:15 +0000 (0:00:01.137) 0:07:55.791 *******
2026-03-11 00:56:54.945037 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945040 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945043 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945046 | orchestrator |
2026-03-11 00:56:54.945049 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-11 00:56:54.945054 | orchestrator | Wednesday 11 March 2026 00:54:15 +0000 (0:00:00.430) 0:07:56.221 *******
2026-03-11 00:56:54.945057 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:54.945060 | orchestrator |
2026-03-11 00:56:54.945063 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-11 00:56:54.945067 | orchestrator | Wednesday 11 March 2026 00:54:16 +0000 (0:00:00.570) 0:07:56.791 *******
2026-03-11 00:56:54.945070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.945073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.945076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.945079 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945082 | orchestrator |
2026-03-11 00:56:54.945085 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-11 00:56:54.945088 | orchestrator | Wednesday 11 March 2026 00:54:17 +0000 (0:00:00.904) 0:07:57.695 *******
2026-03-11 00:56:54.945091 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945094 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945097 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945103 | orchestrator |
2026-03-11 00:56:54.945106 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-11 00:56:54.945109 | orchestrator | Wednesday 11 March 2026 00:54:17 +0000 (0:00:00.302) 0:07:57.998 *******
2026-03-11 00:56:54.945112 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945115 | orchestrator |
2026-03-11 00:56:54.945118 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-11 00:56:54.945121 | orchestrator | Wednesday 11 March 2026 00:54:17 +0000 (0:00:00.220) 0:07:58.218 *******
2026-03-11 00:56:54.945125 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945128 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945131 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945134 | orchestrator |
2026-03-11 00:56:54.945137 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-11 00:56:54.945140 | orchestrator | Wednesday 11 March 2026 00:54:17 +0000 (0:00:00.306) 0:07:58.525 *******
2026-03-11 00:56:54.945143 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945146 | orchestrator |
2026-03-11 00:56:54.945149 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-11 00:56:54.945158 | orchestrator | Wednesday 11 March 2026 00:54:18 +0000 (0:00:00.211) 0:07:58.736 *******
2026-03-11 00:56:54.945161 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945164 | orchestrator |
2026-03-11 00:56:54.945167 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-11 00:56:54.945170 | orchestrator | Wednesday 11 March 2026 00:54:18 +0000 (0:00:00.202) 0:07:58.938 *******
2026-03-11 00:56:54.945176 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945179 | orchestrator |
2026-03-11 00:56:54.945183 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-11 00:56:54.945186 | orchestrator | Wednesday 11 March 2026 00:54:18 +0000 (0:00:00.128) 0:07:59.066 *******
2026-03-11 00:56:54.945189 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945192 | orchestrator |
2026-03-11 00:56:54.945197 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-11 00:56:54.945200 | orchestrator | Wednesday 11 March 2026 00:54:18 +0000 (0:00:00.210) 0:07:59.277 *******
2026-03-11 00:56:54.945203 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945206 | orchestrator |
2026-03-11 00:56:54.945209 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-11 00:56:54.945212 | orchestrator | Wednesday 11 March 2026 00:54:18 +0000 (0:00:00.237) 0:07:59.514 *******
2026-03-11 00:56:54.945215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:54.945218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:54.945221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:54.945225 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945228 | orchestrator |
2026-03-11 00:56:54.945231 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-11 00:56:54.945234 | orchestrator | Wednesday 11 March 2026 00:54:19 +0000 (0:00:00.972) 0:08:00.486 *******
2026-03-11 00:56:54.945237 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945240 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945243 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945246 | orchestrator |
2026-03-11 00:56:54.945249 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-11 00:56:54.945252 | orchestrator | Wednesday 11 March 2026 00:54:20 +0000 (0:00:00.313) 0:08:00.800 *******
2026-03-11 00:56:54.945255 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945258 | orchestrator |
2026-03-11 00:56:54.945261 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-11 00:56:54.945264 | orchestrator | Wednesday 11 March 2026 00:54:20 +0000 (0:00:00.284) 0:08:01.084 *******
2026-03-11 00:56:54.945268 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945271 | orchestrator |
2026-03-11 00:56:54.945276 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-11 00:56:54.945279 | orchestrator |
2026-03-11 00:56:54.945282 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-11 00:56:54.945285 | orchestrator | Wednesday 11 March 2026 00:54:21 +0000 (0:00:00.622) 0:08:01.707 *******
2026-03-11 00:56:54.945288 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.945292 | orchestrator |
2026-03-11 00:56:54.945295 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-11 00:56:54.945298 | orchestrator | Wednesday 11 March 2026 00:54:22 +0000 (0:00:01.210) 0:08:02.918 *******
2026-03-11 00:56:54.945303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:54.945306 | orchestrator |
2026-03-11 00:56:54.945309 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-11 00:56:54.945312 | orchestrator | Wednesday 11 March 2026 00:54:23 +0000 (0:00:01.224) 0:08:04.142 *******
2026-03-11 00:56:54.945315 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945318 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945321 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945324 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945328 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945331 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945334 | orchestrator |
2026-03-11 00:56:54.945337 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-11 00:56:54.945340 | orchestrator | Wednesday 11 March 2026 00:54:24 +0000 (0:00:01.246) 0:08:05.389 *******
2026-03-11 00:56:54.945343 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945346 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945349 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945352 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945355 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945358 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945361 | orchestrator |
2026-03-11 00:56:54.945365 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-11 00:56:54.945368 | orchestrator | Wednesday 11 March 2026 00:54:25 +0000 (0:00:00.821) 0:08:06.210 *******
2026-03-11 00:56:54.945371 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945374 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945377 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945380 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945383 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945386 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945389 | orchestrator |
2026-03-11 00:56:54.945392 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-11 00:56:54.945395 | orchestrator | Wednesday 11 March 2026 00:54:26 +0000 (0:00:01.012) 0:08:07.222 *******
2026-03-11 00:56:54.945398 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945401 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945405 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945408 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945411 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945414 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945417 | orchestrator |
2026-03-11 00:56:54.945420 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-11 00:56:54.945423 | orchestrator | Wednesday 11 March 2026 00:54:27 +0000 (0:00:00.752) 0:08:07.975 *******
2026-03-11 00:56:54.945426 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945429 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945432 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945435 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945438 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945443 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945446 | orchestrator |
2026-03-11 00:56:54.945449 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-11 00:56:54.945452 | orchestrator | Wednesday 11 March 2026 00:54:28 +0000 (0:00:01.262) 0:08:09.237 *******
2026-03-11 00:56:54.945456 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945459 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945464 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945467 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945470 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945473 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945476 | orchestrator |
2026-03-11 00:56:54.945479 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-11 00:56:54.945482 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.570) 0:08:09.807 *******
2026-03-11 00:56:54.945486 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945489 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945492 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945495 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945498 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945501 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945504 | orchestrator |
2026-03-11 00:56:54.945507 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-11 00:56:54.945510 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.826) 0:08:10.633 *******
2026-03-11 00:56:54.945513 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945516 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945519 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945522 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945525 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945528 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945532 | orchestrator |
2026-03-11 00:56:54.945535 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-11 00:56:54.945538 | orchestrator | Wednesday 11 March 2026 00:54:31 +0000 (0:00:01.076) 0:08:11.710 *******
2026-03-11 00:56:54.945541 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945544 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945547 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945550 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945553 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945556 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945559 | orchestrator |
2026-03-11 00:56:54.945562 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-11 00:56:54.945565 | orchestrator | Wednesday 11 March 2026 00:54:32 +0000 (0:00:01.322) 0:08:13.032 *******
2026-03-11 00:56:54.945568 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945571 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945574 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945577 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945580 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945583 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945587 | orchestrator |
2026-03-11 00:56:54.945590 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-11 00:56:54.945593 | orchestrator | Wednesday 11 March 2026 00:54:32 +0000 (0:00:00.505) 0:08:13.538 *******
2026-03-11 00:56:54.945596 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945599 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945602 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945607 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945610 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945613 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945616 | orchestrator |
2026-03-11 00:56:54.945619 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-11 00:56:54.945622 | orchestrator | Wednesday 11 March 2026 00:54:33 +0000 (0:00:00.703) 0:08:14.241 *******
2026-03-11 00:56:54.945627 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945630 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945634 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945637 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945640 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945643 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945646 | orchestrator |
2026-03-11 00:56:54.945649 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-11 00:56:54.945652 | orchestrator | Wednesday 11 March 2026 00:54:34 +0000 (0:00:00.523) 0:08:14.765 *******
2026-03-11 00:56:54.945655 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945658 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945661 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945664 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945667 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945670 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945673 | orchestrator |
2026-03-11 00:56:54.945676 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:54.945690 | orchestrator | Wednesday 11 March 2026 00:54:34 +0000 (0:00:00.649) 0:08:15.414 *******
2026-03-11 00:56:54.945695 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945700 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945706 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945711 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945716 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945719 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945722 | orchestrator |
2026-03-11 00:56:54.945726 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:54.945729 | orchestrator | Wednesday 11 March 2026 00:54:35 +0000 (0:00:00.526) 0:08:15.941 *******
2026-03-11 00:56:54.945732 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945735 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945738 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945741 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945744 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945747 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945750 | orchestrator |
2026-03-11 00:56:54.945753 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:54.945756 | orchestrator | Wednesday 11 March 2026 00:54:36 +0000 (0:00:00.802) 0:08:16.743 *******
2026-03-11 00:56:54.945760 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945763 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945766 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945769 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:54.945772 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:54.945775 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:54.945778 | orchestrator |
2026-03-11 00:56:54.945781 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:54.945784 | orchestrator | Wednesday 11 March 2026 00:54:36 +0000 (0:00:00.591) 0:08:17.335 *******
2026-03-11 00:56:54.945790 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:54.945793 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:54.945796 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:54.945799 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945802 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945805 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945808 | orchestrator |
2026-03-11 00:56:54.945811 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:54.945815 | orchestrator | Wednesday 11 March 2026 00:54:37 +0000 (0:00:00.803) 0:08:18.138 *******
2026-03-11 00:56:54.945818 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945821 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945824 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945829 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945832 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945835 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945838 | orchestrator |
2026-03-11 00:56:54.945841 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:54.945845 | orchestrator | Wednesday 11 March 2026 00:54:38 +0000 (0:00:00.639) 0:08:18.777 *******
2026-03-11 00:56:54.945848 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:54.945851 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:54.945854 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:54.945857 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945860 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:54.945863 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:54.945866 | orchestrator |
2026-03-11 00:56:54.945869 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-11 00:56:54.945872 | orchestrator | Wednesday 11 March 2026 00:54:39 +0000 (0:00:01.150) 0:08:19.928 *******
2026-03-11 00:56:54.945875 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.945878 | orchestrator |
2026-03-11 00:56:54.945881 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-11 00:56:54.945884 | orchestrator | Wednesday 11 March 2026 00:54:43 +0000 (0:00:04.404) 0:08:24.332 *******
2026-03-11 00:56:54.945887 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:54.945890 | orchestrator |
2026-03-11 00:56:54.945893 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-11 00:56:54.945896 | orchestrator | Wednesday 11 March 2026 00:54:45 +0000 (0:00:01.766) 0:08:26.099 *******
2026-03-11 00:56:54.945900 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:54.945903 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:54.945906 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:54.945909 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:54.945912 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.945915 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.945918 | orchestrator |
2026-03-11 00:56:54.945921 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-11 00:56:54.945924 | orchestrator | Wednesday 11 March 2026 00:54:47 +0000 (0:00:01.688) 0:08:27.787 *******
2026-03-11 00:56:54.945929 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:54.945932 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:54.945936 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:54.945939 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:54.945942 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:54.945945 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:54.945948 | orchestrator |
2026-03-11 00:56:54.945951 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Wednesday 11 March 2026 00:54:47 +0000 (0:00:00.996) 0:08:28.640 *******
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Wednesday 11 March 2026 00:54:48 +0000 (0:00:01.527) 0:08:29.637 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Wednesday 11 March 2026 00:54:50 +0000 (0:00:03.229) 0:08:31.164 *******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Wednesday 11 March 2026 00:54:53 +0000 (0:00:01.041) 0:08:34.394 *******
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Wednesday 11 March 2026 00:54:54 +0000 (0:00:00.693) 0:08:35.436 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Wednesday 11 March 2026 00:54:55 +0000 (0:00:02.091) 0:08:36.129 *******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Wednesday 11 March 2026 00:54:57 +0000 (0:00:00.852) 0:08:38.221 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 11 March 2026 00:54:58 +0000 (0:00:00.472) 0:08:39.074 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 11 March 2026 00:54:58 +0000 (0:00:00.717) 0:08:39.547 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 11 March 2026 00:54:59 +0000 (0:00:00.293) 0:08:40.264 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 11 March 2026 00:54:59 +0000 (0:00:00.646) 0:08:40.558 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 11 March 2026 00:55:00 +0000 (0:00:00.959) 0:08:41.204 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 11 March 2026 00:55:01 +0000 (0:00:00.657) 0:08:42.164 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 11 March 2026 00:55:02 +0000 (0:00:00.290) 0:08:42.821 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 11 March 2026 00:55:02 +0000 (0:00:00.297) 0:08:43.111 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 11 March 2026 00:55:02 +0000 (0:00:00.569) 0:08:43.409 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 11 March 2026 00:55:03 +0000 (0:00:00.659) 0:08:43.978 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 11 March 2026 00:55:03 +0000 (0:00:00.637) 0:08:44.638 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 11 March 2026 00:55:04 +0000 (0:00:00.272) 0:08:45.275 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 11 March 2026 00:55:04 +0000 (0:00:00.570) 0:08:45.548 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 11 March 2026 00:55:05 +0000 (0:00:00.319) 0:08:46.119 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 11 March 2026 00:55:05 +0000 (0:00:00.320) 0:08:46.439 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.329) 0:08:46.759 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.538) 0:08:47.089 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.302) 0:08:47.628 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 11 March 2026 00:55:07 +0000 (0:00:00.338) 0:08:47.931 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 11 March 2026 00:55:07 +0000 (0:00:00.307) 0:08:48.269 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 11 March 2026 00:55:07 +0000 (0:00:00.791) 0:08:48.576 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Wednesday 11 March 2026 00:55:08 +0000 (0:00:00.442) 0:08:49.368 *******
skipping: [testbed-node-5]
skipping: [testbed-node-4]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Wednesday 11 March 2026 00:55:09 +0000 (0:00:02.130) 0:08:49.811 *******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Wednesday 11 March 2026 00:55:11 +0000 (0:00:00.227) 0:08:51.942 *******
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Wednesday 11 March 2026 00:55:11 +0000 (0:00:07.570) 0:08:52.170 *******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Wednesday 11 March 2026 00:55:19 +0000 (0:00:03.697) 0:08:59.741 *******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Wednesday 11 March 2026 00:55:22 +0000 (0:00:00.594) 0:09:03.438 *******
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Wednesday 11 March 2026 00:55:23 +0000 (0:00:01.037) 0:09:04.032 *******
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Wednesday 11 March 2026 00:55:24 +0000 (0:00:02.303) 0:09:05.070 *******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Wednesday 11 March 2026 00:55:26 +0000 (0:00:01.536) 0:09:07.373 *******
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Wednesday 11 March 2026 00:55:28 +0000 (0:00:02.832) 0:09:08.910 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Wednesday 11 March 2026 00:55:31 +0000 (0:00:00.362) 0:09:11.743 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Wednesday 11 March 2026 00:55:31 +0000 (0:00:00.600) 0:09:12.105 *******
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Wednesday 11 March 2026 00:55:32 +0000 (0:00:00.644) 0:09:12.706 *******
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Wednesday 11 March 2026 00:55:32 +0000 (0:00:01.385) 0:09:13.350 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Wednesday 11 March 2026 00:55:34 +0000 (0:00:01.106) 0:09:14.736 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Wednesday 11 March 2026 00:55:35 +0000 (0:00:01.858) 0:09:15.843 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Wednesday 11 March 2026 00:55:37 +0000 (0:00:02.151) 0:09:17.701 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Wednesday 11 March 2026 00:55:39 +0000 (0:00:01.495) 0:09:19.853 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Wednesday 11 March 2026 00:55:40 +0000 (0:00:00.880) 0:09:21.348 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Wednesday 11 March 2026 00:55:41 +0000 (0:00:00.710) 0:09:22.228 *******
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Wednesday 11 March 2026 00:55:42 +0000 (0:00:00.328) 0:09:22.939 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Wednesday 11 March 2026 00:55:42 +0000 (0:00:01.090) 0:09:23.267 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Wednesday 11 March 2026 00:55:43 +0000 (0:00:01.135) 0:09:24.358 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Wednesday 11 March 2026 00:55:44 +0000 (0:00:01.062) 0:09:25.493 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 11 March 2026 00:55:45 +0000 (0:00:00.569) 0:09:26.555 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 11 March 2026 00:55:46 +0000 (0:00:00.703) 0:09:27.125 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 11 March 2026 00:55:47 +0000 (0:00:00.334) 0:09:27.829 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 11 March 2026 00:55:47 +0000 (0:00:00.635) 0:09:28.163 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 11 March 2026 00:55:48 +0000 (0:00:00.667) 0:09:28.798 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 11 March 2026 00:55:48 +0000 (0:00:00.946) 0:09:29.466 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 11 March 2026 00:55:49 +0000 (0:00:00.304) 0:09:30.412 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 11 March 2026 00:55:50 +0000 (0:00:00.308) 0:09:30.717 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 11 March 2026 00:55:50 +0000 (0:00:00.294) 0:09:31.025 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 11 March 2026 00:55:50 +0000 (0:00:01.118) 0:09:31.319 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 11 March 2026 00:55:51 +0000 (0:00:00.814) 0:09:32.438 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 11 March 2026 00:55:52 +0000 (0:00:00.287) 0:09:33.253 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 11 March 2026 00:55:52 +0000 (0:00:00.308) 0:09:33.540 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 11 March 2026 00:55:53 +0000 (0:00:00.556) 0:09:33.849 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 11 March 2026 00:55:53 +0000 (0:00:00.316) 0:09:34.405 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 11 March 2026 00:55:54 +0000 0:09:34.722 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
00:56:54.947326 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-11 00:56:54.947329 | orchestrator | Wednesday 11 March 2026 00:55:54 +0000 (0:00:00.319) 0:09:35.041 ******* 2026-03-11 00:56:54.947332 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947335 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947338 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947341 | orchestrator | 2026-03-11 00:56:54.947344 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-11 00:56:54.947347 | orchestrator | Wednesday 11 March 2026 00:55:54 +0000 (0:00:00.290) 0:09:35.331 ******* 2026-03-11 00:56:54.947350 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947353 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947357 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947360 | orchestrator | 2026-03-11 00:56:54.947363 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-11 00:56:54.947366 | orchestrator | Wednesday 11 March 2026 00:55:55 +0000 (0:00:00.555) 0:09:35.887 ******* 2026-03-11 00:56:54.947369 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947372 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947375 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947378 | orchestrator | 2026-03-11 00:56:54.947381 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-11 00:56:54.947386 | orchestrator | Wednesday 11 March 2026 00:55:55 +0000 (0:00:00.320) 0:09:36.207 ******* 2026-03-11 00:56:54.947389 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.947392 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.947395 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.947398 | orchestrator | 2026-03-11 00:56:54.947402 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-11 00:56:54.947405 | orchestrator | Wednesday 11 March 2026 00:55:55 +0000 (0:00:00.307) 0:09:36.515 ******* 2026-03-11 00:56:54.947408 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.947411 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.947414 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.947417 | orchestrator | 2026-03-11 00:56:54.947420 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-11 00:56:54.947423 | orchestrator | Wednesday 11 March 2026 00:55:56 +0000 (0:00:00.749) 0:09:37.264 ******* 2026-03-11 00:56:54.947426 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.947429 | orchestrator | 2026-03-11 00:56:54.947432 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-11 00:56:54.947438 | orchestrator | Wednesday 11 March 2026 00:55:57 +0000 (0:00:00.499) 0:09:37.764 ******* 2026-03-11 00:56:54.947441 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:54.947444 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-11 00:56:54.947447 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-11 00:56:54.947450 | orchestrator | 2026-03-11 00:56:54.947453 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-11 00:56:54.947456 | orchestrator | Wednesday 11 March 2026 00:55:58 +0000 (0:00:01.824) 0:09:39.589 ******* 2026-03-11 00:56:54.947459 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-11 00:56:54.947463 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-11 00:56:54.947466 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.947469 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-11 00:56:54.947472 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-11 00:56:54.947475 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.947478 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-11 00:56:54.947481 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-11 00:56:54.947484 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.947487 | orchestrator | 2026-03-11 00:56:54.947490 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-11 00:56:54.947493 | orchestrator | Wednesday 11 March 2026 00:56:00 +0000 (0:00:01.223) 0:09:40.812 ******* 2026-03-11 00:56:54.947496 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947499 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947502 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947505 | orchestrator | 2026-03-11 00:56:54.947508 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-11 00:56:54.947512 | orchestrator | Wednesday 11 March 2026 00:56:00 +0000 (0:00:00.299) 0:09:41.112 ******* 2026-03-11 00:56:54.947515 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.947518 | orchestrator | 2026-03-11 00:56:54.947521 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-11 00:56:54.947524 | orchestrator | Wednesday 11 March 2026 00:56:00 +0000 (0:00:00.518) 0:09:41.630 ******* 2026-03-11 00:56:54.947527 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:54.947532 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:54.947537 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:54.947540 | orchestrator | 2026-03-11 00:56:54.947544 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-11 00:56:54.947547 | orchestrator | Wednesday 11 March 2026 00:56:02 +0000 (0:00:01.206) 0:09:42.837 ******* 2026-03-11 00:56:54.947550 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:54.947553 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-11 00:56:54.947556 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:54.947559 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-11 00:56:54.947562 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:54.947565 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-11 00:56:54.947568 | orchestrator | 2026-03-11 00:56:54.947572 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-11 00:56:54.947575 | orchestrator | Wednesday 11 March 2026 00:56:06 +0000 (0:00:04.574) 0:09:47.412 ******* 2026-03-11 00:56:54.947578 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:54.947581 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-11 00:56:54.947584 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:54.947587 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-11 00:56:54.947590 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:54.947593 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-11 00:56:54.947596 | orchestrator | 2026-03-11 00:56:54.947599 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-11 00:56:54.947602 | orchestrator | Wednesday 11 March 2026 00:56:09 +0000 (0:00:02.269) 0:09:49.682 ******* 2026-03-11 00:56:54.947605 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-11 00:56:54.947608 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.947611 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-11 00:56:54.947614 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.947618 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-11 00:56:54.947621 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.947624 | orchestrator | 2026-03-11 00:56:54.947627 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-11 00:56:54.947632 | orchestrator | Wednesday 11 March 2026 00:56:10 +0000 (0:00:01.144) 0:09:50.826 ******* 2026-03-11 00:56:54.947635 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-11 00:56:54.947638 | orchestrator | 2026-03-11 00:56:54.947641 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-11 00:56:54.947644 | orchestrator | Wednesday 11 March 2026 00:56:10 +0000 (0:00:00.220) 0:09:51.046 ******* 2026-03-11 00:56:54.947647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-11 00:56:54.947651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947692 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947695 | orchestrator | 2026-03-11 00:56:54.947698 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-11 00:56:54.947702 | orchestrator | Wednesday 11 March 2026 00:56:11 +0000 (0:00:01.038) 0:09:52.085 ******* 2026-03-11 00:56:54.947705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-11 00:56:54.947720 | orchestrator | skipping: [testbed-node-3] 2026-03-11 
00:56:54.947723 | orchestrator | 2026-03-11 00:56:54.947728 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-11 00:56:54.947731 | orchestrator | Wednesday 11 March 2026 00:56:12 +0000 (0:00:00.575) 0:09:52.661 ******* 2026-03-11 00:56:54.947735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:54.947738 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:54.947741 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:54.947744 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:54.947747 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:54.947750 | orchestrator | 2026-03-11 00:56:54.947753 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-11 00:56:54.947756 | orchestrator | Wednesday 11 March 2026 00:56:40 +0000 (0:00:28.183) 0:10:20.844 ******* 2026-03-11 00:56:54.947759 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947762 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947766 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947769 | orchestrator | 2026-03-11 00:56:54.947772 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-11 00:56:54.947775 | orchestrator | 
Wednesday 11 March 2026 00:56:40 +0000 (0:00:00.298) 0:10:21.143 ******* 2026-03-11 00:56:54.947778 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947781 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947784 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947787 | orchestrator | 2026-03-11 00:56:54.947790 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-11 00:56:54.947793 | orchestrator | Wednesday 11 March 2026 00:56:40 +0000 (0:00:00.328) 0:10:21.472 ******* 2026-03-11 00:56:54.947796 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.947799 | orchestrator | 2026-03-11 00:56:54.947802 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-11 00:56:54.947809 | orchestrator | Wednesday 11 March 2026 00:56:41 +0000 (0:00:00.768) 0:10:22.240 ******* 2026-03-11 00:56:54.947812 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.947815 | orchestrator | 2026-03-11 00:56:54.947821 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-11 00:56:54.947824 | orchestrator | Wednesday 11 March 2026 00:56:42 +0000 (0:00:00.517) 0:10:22.758 ******* 2026-03-11 00:56:54.947827 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.947830 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.947833 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.947836 | orchestrator | 2026-03-11 00:56:54.947839 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-11 00:56:54.947842 | orchestrator | Wednesday 11 March 2026 00:56:43 +0000 (0:00:01.402) 0:10:24.161 ******* 2026-03-11 00:56:54.947845 | orchestrator | changed: 
[testbed-node-3] 2026-03-11 00:56:54.947848 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.947851 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.947855 | orchestrator | 2026-03-11 00:56:54.947858 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-11 00:56:54.947861 | orchestrator | Wednesday 11 March 2026 00:56:45 +0000 (0:00:01.603) 0:10:25.764 ******* 2026-03-11 00:56:54.947864 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:54.947867 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:54.947870 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:54.947873 | orchestrator | 2026-03-11 00:56:54.947876 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-11 00:56:54.947879 | orchestrator | Wednesday 11 March 2026 00:56:47 +0000 (0:00:02.196) 0:10:27.960 ******* 2026-03-11 00:56:54.947882 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:54.947885 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:54.947889 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:54.947892 | orchestrator | 2026-03-11 00:56:54.947895 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-11 00:56:54.947898 | orchestrator | Wednesday 11 March 2026 00:56:50 +0000 (0:00:02.951) 0:10:30.912 ******* 2026-03-11 00:56:54.947901 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947904 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947907 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947910 | orchestrator 
| 2026-03-11 00:56:54.947913 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-11 00:56:54.947916 | orchestrator | Wednesday 11 March 2026 00:56:50 +0000 (0:00:00.307) 0:10:31.220 ******* 2026-03-11 00:56:54.947919 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:54.947922 | orchestrator | 2026-03-11 00:56:54.947927 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-11 00:56:54.947930 | orchestrator | Wednesday 11 March 2026 00:56:51 +0000 (0:00:00.449) 0:10:31.669 ******* 2026-03-11 00:56:54.947933 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.947936 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.947940 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.947943 | orchestrator | 2026-03-11 00:56:54.947946 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-11 00:56:54.947949 | orchestrator | Wednesday 11 March 2026 00:56:51 +0000 (0:00:00.444) 0:10:32.113 ******* 2026-03-11 00:56:54.947952 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:54.947955 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:54.947960 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:54.947963 | orchestrator | 2026-03-11 00:56:54.947966 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-11 00:56:54.947969 | orchestrator | Wednesday 11 March 2026 00:56:51 +0000 (0:00:00.293) 0:10:32.407 ******* 2026-03-11 00:56:54.947972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:54.947975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:54.947978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:54.947982 | orchestrator 
| skipping: [testbed-node-3] 2026-03-11 00:56:54.947985 | orchestrator | 2026-03-11 00:56:54.947988 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-11 00:56:54.947991 | orchestrator | Wednesday 11 March 2026 00:56:52 +0000 (0:00:00.590) 0:10:32.997 ******* 2026-03-11 00:56:54.947994 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:54.947997 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:54.948000 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:54.948003 | orchestrator | 2026-03-11 00:56:54.948006 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:56:54.948009 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-11 00:56:54.948012 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-11 00:56:54.948016 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-11 00:56:54.948019 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-11 00:56:54.948022 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-11 00:56:54.948027 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-11 00:56:54.948030 | orchestrator | 2026-03-11 00:56:54.948034 | orchestrator | 2026-03-11 00:56:54.948037 | orchestrator | 2026-03-11 00:56:54.948040 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:56:54.948043 | orchestrator | Wednesday 11 March 2026 00:56:52 +0000 (0:00:00.224) 0:10:33.222 ******* 2026-03-11 00:56:54.948046 | orchestrator | =============================================================================== 
2026-03-11 00:56:54.948049 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.04s 2026-03-11 00:56:54.948052 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 47.03s 2026-03-11 00:56:54.948055 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 35.89s 2026-03-11 00:56:54.948058 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.18s 2026-03-11 00:56:54.948061 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.18s 2026-03-11 00:56:54.948064 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.12s 2026-03-11 00:56:54.948067 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.40s 2026-03-11 00:56:54.948070 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.04s 2026-03-11 00:56:54.948073 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.57s 2026-03-11 00:56:54.948076 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.32s 2026-03-11 00:56:54.948079 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.49s 2026-03-11 00:56:54.948083 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.68s 2026-03-11 00:56:54.948088 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.57s 2026-03-11 00:56:54.948091 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.40s 2026-03-11 00:56:54.948094 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.70s 2026-03-11 00:56:54.948097 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.68s 2026-03-11 
00:56:54.948100 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.58s 2026-03-11 00:56:54.948103 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.48s 2026-03-11 00:56:54.948106 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.37s 2026-03-11 00:56:54.948109 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.24s 2026-03-11 00:56:54.948114 | orchestrator | 2026-03-11 00:56:54 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:54.948128 | orchestrator | 2026-03-11 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:57.983087 | orchestrator | 2026-03-11 00:56:57 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:56:57.984394 | orchestrator | 2026-03-11 00:56:57 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:56:57.985745 | orchestrator | 2026-03-11 00:56:57 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:56:57.985794 | orchestrator | 2026-03-11 00:56:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:01.021717 | orchestrator | 2026-03-11 00:57:01 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:01.024283 | orchestrator | 2026-03-11 00:57:01 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:01.025382 | orchestrator | 2026-03-11 00:57:01 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:01.025775 | orchestrator | 2026-03-11 00:57:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:04.069067 | orchestrator | 2026-03-11 00:57:04 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:04.072254 | orchestrator | 2026-03-11 00:57:04 | INFO  
| Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:04.074499 | orchestrator | 2026-03-11 00:57:04 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:04.074566 | orchestrator | 2026-03-11 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:07.123183 | orchestrator | 2026-03-11 00:57:07 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:07.125264 | orchestrator | 2026-03-11 00:57:07 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:07.126713 | orchestrator | 2026-03-11 00:57:07 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:07.126750 | orchestrator | 2026-03-11 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:10.171172 | orchestrator | 2026-03-11 00:57:10 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:10.173507 | orchestrator | 2026-03-11 00:57:10 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:10.178777 | orchestrator | 2026-03-11 00:57:10 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:10.178829 | orchestrator | 2026-03-11 00:57:10 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:13.209480 | orchestrator | 2026-03-11 00:57:13 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:13.210722 | orchestrator | 2026-03-11 00:57:13 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:13.211898 | orchestrator | 2026-03-11 00:57:13 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:13.211966 | orchestrator | 2026-03-11 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:16.248406 | orchestrator | 2026-03-11 00:57:16 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state 
STARTED 2026-03-11 00:57:16.248498 | orchestrator | 2026-03-11 00:57:16 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:16.251014 | orchestrator | 2026-03-11 00:57:16 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:16.251066 | orchestrator | 2026-03-11 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:19.300267 | orchestrator | 2026-03-11 00:57:19 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:19.302152 | orchestrator | 2026-03-11 00:57:19 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:19.303967 | orchestrator | 2026-03-11 00:57:19 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:19.304003 | orchestrator | 2026-03-11 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:22.344299 | orchestrator | 2026-03-11 00:57:22 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:22.344813 | orchestrator | 2026-03-11 00:57:22 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:22.349527 | orchestrator | 2026-03-11 00:57:22 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:22.349579 | orchestrator | 2026-03-11 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:25.398064 | orchestrator | 2026-03-11 00:57:25 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:25.399439 | orchestrator | 2026-03-11 00:57:25 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:25.401151 | orchestrator | 2026-03-11 00:57:25 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:25.401203 | orchestrator | 2026-03-11 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:28.455184 | orchestrator | 
2026-03-11 00:57:28 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:28.457613 | orchestrator | 2026-03-11 00:57:28 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:28.460477 | orchestrator | 2026-03-11 00:57:28 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state STARTED 2026-03-11 00:57:28.461153 | orchestrator | 2026-03-11 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:31.506909 | orchestrator | 2026-03-11 00:57:31 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:31.508234 | orchestrator | 2026-03-11 00:57:31 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:31.511013 | orchestrator | 2026-03-11 00:57:31 | INFO  | Task 0346b9f3-7854-4edd-9e67-2f03292e2e57 is in state SUCCESS 2026-03-11 00:57:31.512364 | orchestrator | 2026-03-11 00:57:31.512397 | orchestrator | 2026-03-11 00:57:31.512403 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:57:31.512408 | orchestrator | 2026-03-11 00:57:31.512412 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:57:31.512433 | orchestrator | Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.265) 0:00:00.265 ******* 2026-03-11 00:57:31.512437 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:31.512443 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:31.512446 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:31.512451 | orchestrator | 2026-03-11 00:57:31.512455 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:57:31.512459 | orchestrator | Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.277) 0:00:00.543 ******* 2026-03-11 00:57:31.512463 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-11 00:57:31.512468 | 
orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-11 00:57:31.512472 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-11 00:57:31.512476 | orchestrator | 2026-03-11 00:57:31.512480 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-11 00:57:31.512483 | orchestrator | 2026-03-11 00:57:31.512487 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:31.512491 | orchestrator | Wednesday 11 March 2026 00:55:07 +0000 (0:00:00.448) 0:00:00.991 ******* 2026-03-11 00:57:31.512497 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:31.512503 | orchestrator | 2026-03-11 00:57:31.512509 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-11 00:57:31.512515 | orchestrator | Wednesday 11 March 2026 00:55:07 +0000 (0:00:00.474) 0:00:01.465 ******* 2026-03-11 00:57:31.512523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:57:31.512544 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:57:31.512555 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:57:31.512561 | orchestrator | 2026-03-11 00:57:31.512567 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-11 00:57:31.512572 | orchestrator | Wednesday 11 March 2026 00:55:08 +0000 (0:00:00.640) 0:00:02.106 ******* 2026-03-11 00:57:31.512581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.512664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.512676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.512688 | orchestrator | 2026-03-11 00:57:31.512694 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:31.512700 | orchestrator | Wednesday 11 March 2026 00:55:10 +0000 (0:00:01.571) 
0:00:03.677 ******* 2026-03-11 00:57:31.512707 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:31.512713 | orchestrator | 2026-03-11 00:57:31.512719 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-11 00:57:31.512725 | orchestrator | Wednesday 11 March 2026 00:55:10 +0000 (0:00:00.518) 0:00:04.196 ******* 2026-03-11 00:57:31.512737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.512779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.512789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.512799 | orchestrator | 2026-03-11 00:57:31.512808 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-11 00:57:31.512814 | orchestrator | Wednesday 11 March 2026 00:55:13 +0000 (0:00:02.886) 0:00:07.083 ******* 2026-03-11 00:57:31.512821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:31.512832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:31.512845 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:31.512858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:31.512863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:31.512867 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:31.512871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:31.512885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:31.512894 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:31.512898 | orchestrator | 2026-03-11 00:57:31.512902 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-11 00:57:31.512906 | orchestrator | Wednesday 11 March 2026 00:55:14 +0000 (0:00:01.177) 0:00:08.260 ******* 2026-03-11 00:57:31.512913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:31.512917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:31.512921 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:31.512925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:31.512932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:31.512940 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:31.512947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:31.512951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:31.512955 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:31.512959 | orchestrator | 2026-03-11 00:57:31.512964 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-11 00:57:31.512968 | orchestrator | Wednesday 11 March 2026 00:55:15 +0000 (0:00:01.067) 0:00:09.328 ******* 2026-03-11 00:57:31.512972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.512999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.513004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.513012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.513021 | orchestrator | 2026-03-11 00:57:31.513025 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-11 00:57:31.513029 | orchestrator | Wednesday 11 March 2026 00:55:17 +0000 (0:00:02.206) 
0:00:11.535 ******* 2026-03-11 00:57:31.513034 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:31.513038 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:31.513043 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:31.513047 | orchestrator | 2026-03-11 00:57:31.513051 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-11 00:57:31.513056 | orchestrator | Wednesday 11 March 2026 00:55:20 +0000 (0:00:02.741) 0:00:14.277 ******* 2026-03-11 00:57:31.513060 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:31.513064 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:31.513069 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:31.513073 | orchestrator | 2026-03-11 00:57:31.513077 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-11 00:57:31.513082 | orchestrator | Wednesday 11 March 2026 00:55:22 +0000 (0:00:01.799) 0:00:16.076 ******* 2026-03-11 00:57:31.513092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.513097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.513102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:31.513113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.513122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.513127 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:31.513132 | orchestrator | 2026-03-11 00:57:31.513136 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:31.513144 | orchestrator | Wednesday 11 March 2026 00:55:24 +0000 (0:00:02.189) 0:00:18.265 ******* 2026-03-11 00:57:31.513148 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:31.513153 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:31.513157 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:31.513161 | orchestrator | 2026-03-11 00:57:31.513166 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-11 00:57:31.513170 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:00.305) 0:00:18.571 ******* 2026-03-11 00:57:31.513174 | orchestrator | 2026-03-11 00:57:31.513179 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-11 
00:57:31.513183 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:00.065) 0:00:18.636 ******* 2026-03-11 00:57:31.513187 | orchestrator | 2026-03-11 00:57:31.513192 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-11 00:57:31.513196 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:00.067) 0:00:18.703 ******* 2026-03-11 00:57:31.513201 | orchestrator | 2026-03-11 00:57:31.513205 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-11 00:57:31.513210 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:00.065) 0:00:18.768 ******* 2026-03-11 00:57:31.513214 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:31.513218 | orchestrator | 2026-03-11 00:57:31.513223 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-11 00:57:31.513227 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:00.230) 0:00:18.999 ******* 2026-03-11 00:57:31.513231 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:31.513236 | orchestrator | 2026-03-11 00:57:31.513240 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-11 00:57:31.513244 | orchestrator | Wednesday 11 March 2026 00:55:26 +0000 (0:00:00.638) 0:00:19.637 ******* 2026-03-11 00:57:31.513251 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:31.513256 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:31.513260 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:31.513265 | orchestrator | 2026-03-11 00:57:31.513269 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-11 00:57:31.513273 | orchestrator | Wednesday 11 March 2026 00:56:15 +0000 (0:00:49.283) 0:01:08.920 ******* 2026-03-11 00:57:31.513277 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:31.513282 | orchestrator 
| changed: [testbed-node-2] 2026-03-11 00:57:31.513286 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:31.513290 | orchestrator | 2026-03-11 00:57:31.513294 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:31.513299 | orchestrator | Wednesday 11 March 2026 00:57:19 +0000 (0:01:04.155) 0:02:13.075 ******* 2026-03-11 00:57:31.513303 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:31.513307 | orchestrator | 2026-03-11 00:57:31.513312 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-11 00:57:31.513316 | orchestrator | Wednesday 11 March 2026 00:57:20 +0000 (0:00:00.727) 0:02:13.803 ******* 2026-03-11 00:57:31.513321 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:31.513325 | orchestrator | 2026-03-11 00:57:31.513330 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-11 00:57:31.513334 | orchestrator | Wednesday 11 March 2026 00:57:22 +0000 (0:00:02.219) 0:02:16.023 ******* 2026-03-11 00:57:31.513339 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:31.513343 | orchestrator | 2026-03-11 00:57:31.513347 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-11 00:57:31.513352 | orchestrator | Wednesday 11 March 2026 00:57:24 +0000 (0:00:02.133) 0:02:18.157 ******* 2026-03-11 00:57:31.513359 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:31.513365 | orchestrator | 2026-03-11 00:57:31.513370 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-11 00:57:31.513387 | orchestrator | Wednesday 11 March 2026 00:57:27 +0000 (0:00:03.018) 0:02:21.176 ******* 2026-03-11 00:57:31.513395 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:31.513401 | orchestrator | 
2026-03-11 00:57:31.513411 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:57:31.513418 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 00:57:31.513426 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:57:31.513432 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:57:31.513438 | orchestrator | 2026-03-11 00:57:31.513443 | orchestrator | 2026-03-11 00:57:31.513450 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:57:31.513455 | orchestrator | Wednesday 11 March 2026 00:57:30 +0000 (0:00:02.958) 0:02:24.135 ******* 2026-03-11 00:57:31.513460 | orchestrator | =============================================================================== 2026-03-11 00:57:31.513466 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 64.16s 2026-03-11 00:57:31.513473 | orchestrator | opensearch : Restart opensearch container ------------------------------ 49.28s 2026-03-11 00:57:31.513479 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.02s 2026-03-11 00:57:31.513485 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.96s 2026-03-11 00:57:31.513491 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.89s 2026-03-11 00:57:31.513497 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.74s 2026-03-11 00:57:31.513503 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.22s 2026-03-11 00:57:31.513509 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.21s 2026-03-11 00:57:31.513515 | 
orchestrator | opensearch : Check opensearch containers -------------------------------- 2.19s 2026-03-11 00:57:31.513522 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.13s 2026-03-11 00:57:31.513528 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.80s 2026-03-11 00:57:31.513535 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.57s 2026-03-11 00:57:31.513541 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.18s 2026-03-11 00:57:31.513545 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.07s 2026-03-11 00:57:31.513549 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s 2026-03-11 00:57:31.513553 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2026-03-11 00:57:31.513557 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.64s 2026-03-11 00:57:31.513560 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-11 00:57:31.513564 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2026-03-11 00:57:31.513568 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-03-11 00:57:31.513572 | orchestrator | 2026-03-11 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:34.546926 | orchestrator | 2026-03-11 00:57:34 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:34.548627 | orchestrator | 2026-03-11 00:57:34 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:34.548701 | orchestrator | 2026-03-11 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:37.584989 | 
orchestrator | 2026-03-11 00:57:37 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:37.585881 | orchestrator | 2026-03-11 00:57:37 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:37.585960 | orchestrator | 2026-03-11 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:40.635565 | orchestrator | 2026-03-11 00:57:40 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:40.638705 | orchestrator | 2026-03-11 00:57:40 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:40.638778 | orchestrator | 2026-03-11 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:43.676014 | orchestrator | 2026-03-11 00:57:43 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:43.677086 | orchestrator | 2026-03-11 00:57:43 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:43.677121 | orchestrator | 2026-03-11 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:46.714853 | orchestrator | 2026-03-11 00:57:46 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:46.716159 | orchestrator | 2026-03-11 00:57:46 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:46.716222 | orchestrator | 2026-03-11 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:49.756841 | orchestrator | 2026-03-11 00:57:49 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:49.757889 | orchestrator | 2026-03-11 00:57:49 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:49.757953 | orchestrator | 2026-03-11 00:57:49 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:52.805764 | orchestrator | 2026-03-11 00:57:52 | INFO  | Task 
ada87c7e-5218-4748-938a-80f3f87e807f is in state STARTED 2026-03-11 00:57:52.807100 | orchestrator | 2026-03-11 00:57:52 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:52.807238 | orchestrator | 2026-03-11 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:55.868176 | orchestrator | 2026-03-11 00:57:55 | INFO  | Task ada87c7e-5218-4748-938a-80f3f87e807f is in state SUCCESS 2026-03-11 00:57:55.869359 | orchestrator | 2026-03-11 00:57:55.869417 | orchestrator | 2026-03-11 00:57:55.869576 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-11 00:57:55.869589 | orchestrator | 2026-03-11 00:57:55.869593 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-11 00:57:55.869598 | orchestrator | Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.088) 0:00:00.088 ******* 2026-03-11 00:57:55.869602 | orchestrator | ok: [localhost] => { 2026-03-11 00:57:55.869608 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-11 00:57:55.869612 | orchestrator | } 2026-03-11 00:57:55.869617 | orchestrator | 2026-03-11 00:57:55.869621 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-11 00:57:55.869678 | orchestrator | Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.043) 0:00:00.132 ******* 2026-03-11 00:57:55.869684 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-11 00:57:55.869691 | orchestrator | ...ignoring 2026-03-11 00:57:55.869695 | orchestrator | 2026-03-11 00:57:55.869699 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-11 00:57:55.869703 | orchestrator | Wednesday 11 March 2026 00:55:09 +0000 (0:00:02.870) 0:00:03.002 ******* 2026-03-11 00:57:55.869706 | orchestrator | skipping: [localhost] 2026-03-11 00:57:55.869728 | orchestrator | 2026-03-11 00:57:55.869733 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-11 00:57:55.869737 | orchestrator | Wednesday 11 March 2026 00:55:09 +0000 (0:00:00.052) 0:00:03.055 ******* 2026-03-11 00:57:55.869741 | orchestrator | ok: [localhost] 2026-03-11 00:57:55.869744 | orchestrator | 2026-03-11 00:57:55.869748 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:57:55.869752 | orchestrator | 2026-03-11 00:57:55.869756 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:57:55.869760 | orchestrator | Wednesday 11 March 2026 00:55:09 +0000 (0:00:00.173) 0:00:03.228 ******* 2026-03-11 00:57:55.869764 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.869768 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.869772 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.869775 | orchestrator | 2026-03-11 00:57:55.869779 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:57:55.869783 | orchestrator | Wednesday 11 March 2026 00:55:10 +0000 (0:00:00.315) 0:00:03.544 ******* 2026-03-11 00:57:55.869787 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-11 00:57:55.869801 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-11 00:57:55.869805 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-11 00:57:55.869809 | orchestrator | 2026-03-11 00:57:55.869812 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-11 00:57:55.869816 | orchestrator | 2026-03-11 00:57:55.869820 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-11 00:57:55.869823 | orchestrator | Wednesday 11 March 2026 00:55:10 +0000 (0:00:00.540) 0:00:04.085 ******* 2026-03-11 00:57:55.869827 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-11 00:57:55.869832 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-11 00:57:55.869835 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-11 00:57:55.869839 | orchestrator | 2026-03-11 00:57:55.869843 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:55.869847 | orchestrator | Wednesday 11 March 2026 00:55:10 +0000 (0:00:00.348) 0:00:04.433 ******* 2026-03-11 00:57:55.869851 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:55.869855 | orchestrator | 2026-03-11 00:57:55.869859 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-11 00:57:55.869864 | orchestrator | Wednesday 11 March 2026 00:55:11 +0000 (0:00:00.527) 0:00:04.961 ******* 2026-03-11 00:57:55.869885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.869899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.869904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.869913 | orchestrator | 2026-03-11 00:57:55.869923 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-11 00:57:55.869927 | orchestrator | Wednesday 11 March 2026 00:55:14 +0000 (0:00:02.985) 0:00:07.947 ******* 2026-03-11 00:57:55.869931 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.869935 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.869939 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.869943 | orchestrator | 2026-03-11 00:57:55.869946 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-11 00:57:55.869951 | orchestrator | Wednesday 11 March 2026 00:55:15 +0000 (0:00:00.828) 0:00:08.776 ******* 2026-03-11 00:57:55.869957 | orchestrator | 
skipping: [testbed-node-1] 2026-03-11 00:57:55.869963 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.869970 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.869976 | orchestrator | 2026-03-11 00:57:55.869982 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-11 00:57:55.869989 | orchestrator | Wednesday 11 March 2026 00:55:16 +0000 (0:00:01.574) 0:00:10.351 ******* 2026-03-11 00:57:55.870000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.870059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.870077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.870084 | orchestrator | 2026-03-11 00:57:55.870088 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-11 00:57:55.870092 | orchestrator | Wednesday 11 March 2026 00:55:20 +0000 (0:00:03.635) 0:00:13.986 ******* 2026-03-11 00:57:55.870097 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870103 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870109 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.870115 | orchestrator | 2026-03-11 00:57:55.870121 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-11 00:57:55.870128 | orchestrator | Wednesday 11 March 2026 00:55:21 +0000 (0:00:01.157) 0:00:15.144 ******* 2026-03-11 00:57:55.870134 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.870141 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:55.870147 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:55.870153 | orchestrator | 2026-03-11 00:57:55.870158 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:55.870164 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:03.995) 0:00:19.140 ******* 2026-03-11 00:57:55.870171 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:55.870178 | orchestrator | 2026-03-11 00:57:55.870182 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-11 00:57:55.870190 | orchestrator | Wednesday 11 March 2026 00:55:26 +0000 (0:00:00.509) 0:00:19.649 ******* 2026-03-11 00:57:55.870200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870205 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.870212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870216 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870342 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870350 | orchestrator | 2026-03-11 00:57:55.870355 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-11 00:57:55.870358 | orchestrator | Wednesday 11 March 2026 00:55:29 
+0000 (0:00:03.694) 0:00:23.343 ******* 2026-03-11 00:57:55.870367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870372 | orchestrator | skipping: [testbed-node-0] 2026-03-11 
00:57:55.870382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870392 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870404 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870408 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870412 | orchestrator | 2026-03-11 00:57:55.870416 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-03-11 00:57:55.870419 | orchestrator | Wednesday 11 March 2026 00:55:32 +0000 (0:00:02.287) 0:00:25.631 ******* 2026-03-11 00:57:55.870430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870435 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.870442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870446 
| orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:55.870458 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
00:57:55.870462 | orchestrator | 2026-03-11 00:57:55.870466 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-11 00:57:55.870470 | orchestrator | Wednesday 11 March 2026 00:55:34 +0000 (0:00:02.489) 0:00:28.120 ******* 2026-03-11 00:57:55.870481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.870486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-11 00:57:55.870498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:55.870502 | orchestrator | 2026-03-11 00:57:55.870506 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-03-11 00:57:55.870510 | orchestrator | Wednesday 11 March 2026 00:55:37 +0000 (0:00:02.732) 0:00:30.852 ******* 2026-03-11 00:57:55.870514 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.870518 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:55.870522 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:55.870526 | orchestrator | 2026-03-11 00:57:55.870529 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-11 00:57:55.870534 | orchestrator | Wednesday 11 March 2026 00:55:38 +0000 (0:00:00.831) 0:00:31.684 ******* 2026-03-11 00:57:55.870541 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.870545 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.870549 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.870553 | orchestrator | 2026-03-11 00:57:55.870557 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-11 00:57:55.870561 | orchestrator | Wednesday 11 March 2026 00:55:38 +0000 (0:00:00.380) 0:00:32.065 ******* 2026-03-11 00:57:55.870565 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.870569 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.870573 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.870576 | orchestrator | 2026-03-11 00:57:55.870580 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-11 00:57:55.870584 | orchestrator | Wednesday 11 March 2026 00:55:38 +0000 (0:00:00.279) 0:00:32.345 ******* 2026-03-11 00:57:55.870588 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-11 00:57:55.870593 | orchestrator | ...ignoring 2026-03-11 00:57:55.870597 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-11 00:57:55.870601 | orchestrator | ...ignoring 2026-03-11 00:57:55.870604 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-11 00:57:55.870608 | orchestrator | ...ignoring 2026-03-11 00:57:55.870612 | orchestrator | 2026-03-11 00:57:55.870616 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-11 00:57:55.870619 | orchestrator | Wednesday 11 March 2026 00:55:49 +0000 (0:00:10.827) 0:00:43.172 ******* 2026-03-11 00:57:55.870623 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.870645 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.870649 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.870653 | orchestrator | 2026-03-11 00:57:55.870657 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-11 00:57:55.870661 | orchestrator | Wednesday 11 March 2026 00:55:50 +0000 (0:00:00.409) 0:00:43.581 ******* 2026-03-11 00:57:55.870664 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.870694 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870698 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870702 | orchestrator | 2026-03-11 00:57:55.870706 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-11 00:57:55.870710 | orchestrator | Wednesday 11 March 2026 00:55:50 +0000 (0:00:00.647) 0:00:44.229 ******* 2026-03-11 00:57:55.870713 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.870717 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870721 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870725 | orchestrator | 2026-03-11 00:57:55.870728 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-11 00:57:55.870732 | orchestrator | Wednesday 11 March 2026 00:55:51 +0000 (0:00:00.414) 0:00:44.643 ******* 2026-03-11 00:57:55.870737 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.870743 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870750 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870759 | orchestrator | 2026-03-11 00:57:55.870767 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-11 00:57:55.870780 | orchestrator | Wednesday 11 March 2026 00:55:51 +0000 (0:00:00.399) 0:00:45.043 ******* 2026-03-11 00:57:55.870787 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.870793 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.870808 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.870814 | orchestrator | 2026-03-11 00:57:55.870820 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-11 00:57:55.870826 | orchestrator | Wednesday 11 March 2026 00:55:51 +0000 (0:00:00.412) 0:00:45.456 ******* 2026-03-11 00:57:55.870838 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.870844 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870852 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870858 | orchestrator | 2026-03-11 00:57:55.870864 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:55.870869 | orchestrator | Wednesday 11 March 2026 00:55:52 +0000 (0:00:00.672) 0:00:46.129 ******* 2026-03-11 00:57:55.870875 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870881 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870887 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-11 00:57:55.870894 | orchestrator | 2026-03-11 
00:57:55.870899 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-11 00:57:55.870905 | orchestrator | Wednesday 11 March 2026 00:55:52 +0000 (0:00:00.370) 0:00:46.499 ******* 2026-03-11 00:57:55.870912 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.870918 | orchestrator | 2026-03-11 00:57:55.870923 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-11 00:57:55.870929 | orchestrator | Wednesday 11 March 2026 00:56:02 +0000 (0:00:09.364) 0:00:55.864 ******* 2026-03-11 00:57:55.870935 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.870941 | orchestrator | 2026-03-11 00:57:55.870947 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:55.870953 | orchestrator | Wednesday 11 March 2026 00:56:02 +0000 (0:00:00.123) 0:00:55.987 ******* 2026-03-11 00:57:55.870960 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.870966 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.870973 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.870980 | orchestrator | 2026-03-11 00:57:55.870985 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-11 00:57:55.870990 | orchestrator | Wednesday 11 March 2026 00:56:03 +0000 (0:00:00.937) 0:00:56.924 ******* 2026-03-11 00:57:55.870994 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.870999 | orchestrator | 2026-03-11 00:57:55.871003 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-11 00:57:55.871012 | orchestrator | Wednesday 11 March 2026 00:56:10 +0000 (0:00:07.393) 0:01:04.318 ******* 2026-03-11 00:57:55.871016 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.871020 | orchestrator | 2026-03-11 00:57:55.871025 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-03-11 00:57:55.871029 | orchestrator | Wednesday 11 March 2026 00:56:12 +0000 (0:00:01.623) 0:01:05.941 ******* 2026-03-11 00:57:55.871033 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.871038 | orchestrator | 2026-03-11 00:57:55.871043 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-11 00:57:55.871047 | orchestrator | Wednesday 11 March 2026 00:56:14 +0000 (0:00:02.223) 0:01:08.165 ******* 2026-03-11 00:57:55.871052 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.871056 | orchestrator | 2026-03-11 00:57:55.871060 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-11 00:57:55.871065 | orchestrator | Wednesday 11 March 2026 00:56:14 +0000 (0:00:00.122) 0:01:08.287 ******* 2026-03-11 00:57:55.871069 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.871074 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.871079 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.871083 | orchestrator | 2026-03-11 00:57:55.871088 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-11 00:57:55.871092 | orchestrator | Wednesday 11 March 2026 00:56:15 +0000 (0:00:00.267) 0:01:08.554 ******* 2026-03-11 00:57:55.871096 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.871101 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-11 00:57:55.871105 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:55.871110 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:55.871114 | orchestrator | 2026-03-11 00:57:55.871123 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-11 00:57:55.871128 | orchestrator | skipping: no hosts matched 2026-03-11 00:57:55.871132 | orchestrator | 2026-03-11 00:57:55.871136 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-11 00:57:55.871140 | orchestrator | 2026-03-11 00:57:55.871144 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-11 00:57:55.871149 | orchestrator | Wednesday 11 March 2026 00:56:15 +0000 (0:00:00.522) 0:01:09.077 ******* 2026-03-11 00:57:55.871153 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:55.871157 | orchestrator | 2026-03-11 00:57:55.871162 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-11 00:57:55.871166 | orchestrator | Wednesday 11 March 2026 00:56:35 +0000 (0:00:19.626) 0:01:28.703 ******* 2026-03-11 00:57:55.871171 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.871175 | orchestrator | 2026-03-11 00:57:55.871180 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-11 00:57:55.871184 | orchestrator | Wednesday 11 March 2026 00:56:45 +0000 (0:00:10.555) 0:01:39.259 ******* 2026-03-11 00:57:55.871188 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.871193 | orchestrator | 2026-03-11 00:57:55.871197 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-11 00:57:55.871201 | orchestrator | 2026-03-11 00:57:55.871205 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-11 00:57:55.871210 | orchestrator | Wednesday 11 March 2026 00:56:48 +0000 (0:00:02.419) 0:01:41.679 ******* 2026-03-11 00:57:55.871214 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:55.871218 | orchestrator | 2026-03-11 00:57:55.871223 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-11 00:57:55.871232 | orchestrator | Wednesday 11 March 2026 00:57:04 +0000 (0:00:16.200) 0:01:57.879 ******* 2026-03-11 00:57:55.871236 | 
orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.871240 | orchestrator | 2026-03-11 00:57:55.871245 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-11 00:57:55.871249 | orchestrator | Wednesday 11 March 2026 00:57:19 +0000 (0:00:15.495) 0:02:13.374 ******* 2026-03-11 00:57:55.871253 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.871258 | orchestrator | 2026-03-11 00:57:55.871262 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-11 00:57:55.871267 | orchestrator | 2026-03-11 00:57:55.871271 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-11 00:57:55.871275 | orchestrator | Wednesday 11 March 2026 00:57:22 +0000 (0:00:02.660) 0:02:16.035 ******* 2026-03-11 00:57:55.871280 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.871284 | orchestrator | 2026-03-11 00:57:55.871289 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-11 00:57:55.871293 | orchestrator | Wednesday 11 March 2026 00:57:33 +0000 (0:00:10.832) 0:02:26.868 ******* 2026-03-11 00:57:55.871297 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.871301 | orchestrator | 2026-03-11 00:57:55.871312 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-11 00:57:55.871317 | orchestrator | Wednesday 11 March 2026 00:57:37 +0000 (0:00:04.554) 0:02:31.422 ******* 2026-03-11 00:57:55.871321 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.871325 | orchestrator | 2026-03-11 00:57:55.871330 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-11 00:57:55.871336 | orchestrator | 2026-03-11 00:57:55.871342 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-11 00:57:55.871347 | orchestrator | 
Wednesday 11 March 2026 00:57:40 +0000 (0:00:02.618) 0:02:34.041 ******* 2026-03-11 00:57:55.871355 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:55.871364 | orchestrator | 2026-03-11 00:57:55.871370 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-11 00:57:55.871377 | orchestrator | Wednesday 11 March 2026 00:57:41 +0000 (0:00:00.537) 0:02:34.579 ******* 2026-03-11 00:57:55.871388 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.871394 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.871400 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.871406 | orchestrator | 2026-03-11 00:57:55.871412 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-11 00:57:55.871418 | orchestrator | Wednesday 11 March 2026 00:57:43 +0000 (0:00:02.453) 0:02:37.032 ******* 2026-03-11 00:57:55.871424 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.871430 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.871440 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.871448 | orchestrator | 2026-03-11 00:57:55.871452 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-11 00:57:55.871456 | orchestrator | Wednesday 11 March 2026 00:57:45 +0000 (0:00:01.884) 0:02:38.917 ******* 2026-03-11 00:57:55.871460 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.871463 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.871467 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.871471 | orchestrator | 2026-03-11 00:57:55.871475 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-11 00:57:55.871478 | orchestrator | Wednesday 11 March 2026 00:57:47 +0000 (0:00:01.964) 0:02:40.881 ******* 2026-03-11 00:57:55.871482 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.871486 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.871489 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:55.871493 | orchestrator | 2026-03-11 00:57:55.871497 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-11 00:57:55.871500 | orchestrator | Wednesday 11 March 2026 00:57:49 +0000 (0:00:01.944) 0:02:42.826 ******* 2026-03-11 00:57:55.871504 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:55.871508 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:55.871512 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:55.871515 | orchestrator | 2026-03-11 00:57:55.871519 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-11 00:57:55.871523 | orchestrator | Wednesday 11 March 2026 00:57:52 +0000 (0:00:03.063) 0:02:45.890 ******* 2026-03-11 00:57:55.871527 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:55.871531 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:55.871535 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:55.871538 | orchestrator | 2026-03-11 00:57:55.871542 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:57:55.871547 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-11 00:57:55.871551 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-11 00:57:55.871556 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-11 00:57:55.871560 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-11 00:57:55.871564 | orchestrator | 2026-03-11 00:57:55.871568 | orchestrator | 2026-03-11 00:57:55.871572 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-11 00:57:55.871575 | orchestrator | Wednesday 11 March 2026 00:57:52 +0000 (0:00:00.232) 0:02:46.122 ******* 2026-03-11 00:57:55.871579 | orchestrator | =============================================================================== 2026-03-11 00:57:55.871583 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.83s 2026-03-11 00:57:55.871587 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.05s 2026-03-11 00:57:55.871599 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.83s 2026-03-11 00:57:55.871612 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s 2026-03-11 00:57:55.871846 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.36s 2026-03-11 00:57:55.871862 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.39s 2026-03-11 00:57:55.871868 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.08s 2026-03-11 00:57:55.871875 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s 2026-03-11 00:57:55.871881 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.00s 2026-03-11 00:57:55.871887 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.69s 2026-03-11 00:57:55.871893 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.64s 2026-03-11 00:57:55.871900 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.06s 2026-03-11 00:57:55.871906 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.99s 2026-03-11 00:57:55.871912 | orchestrator | Check MariaDB service 
--------------------------------------------------- 2.87s 2026-03-11 00:57:55.871918 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.73s 2026-03-11 00:57:55.871924 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.62s 2026-03-11 00:57:55.871930 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.49s 2026-03-11 00:57:55.871936 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.45s 2026-03-11 00:57:55.871943 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.29s 2026-03-11 00:57:55.871949 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.22s 2026-03-11 00:57:55.871956 | orchestrator | 2026-03-11 00:57:55 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:55.871963 | orchestrator | 2026-03-11 00:57:55 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 00:57:55.871982 | orchestrator | 2026-03-11 00:57:55 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED 2026-03-11 00:57:55.871990 | orchestrator | 2026-03-11 00:57:55 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:58.913860 | orchestrator | 2026-03-11 00:57:58 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 00:57:58.915950 | orchestrator | 2026-03-11 00:57:58 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 00:57:58.916835 | orchestrator | 2026-03-11 00:57:58 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED 2026-03-11 00:57:58.916866 | orchestrator | 2026-03-11 00:57:58 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:01.954738 | orchestrator | 2026-03-11 00:58:01 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state STARTED 2026-03-11 
00:58:59.815354 | orchestrator | 2026-03-11 00:58:59 | INFO  | Task 977306e0-6475-477d-9138-2a6cf09aefa4 is in state SUCCESS 2026-03-11 00:58:59.817961 | orchestrator | 2026-03-11 00:58:59.818649 | orchestrator | [WARNING]: Collection 
community.general does not support Ansible version 2026-03-11 00:58:59.818676 | orchestrator | 2.16.14 2026-03-11 00:58:59.818688 | orchestrator | 2026-03-11 00:58:59.818698 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-11 00:58:59.818709 | orchestrator | 2026-03-11 00:58:59.818720 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-11 00:58:59.818730 | orchestrator | Wednesday 11 March 2026 00:56:56 +0000 (0:00:00.455) 0:00:00.455 ******* 2026-03-11 00:58:59.818741 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:58:59.818752 | orchestrator | 2026-03-11 00:58:59.818763 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-11 00:58:59.818774 | orchestrator | Wednesday 11 March 2026 00:56:57 +0000 (0:00:00.475) 0:00:00.931 ******* 2026-03-11 00:58:59.818786 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:59.818797 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:59.818808 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:59.818816 | orchestrator | 2026-03-11 00:58:59.818823 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-11 00:58:59.818829 | orchestrator | Wednesday 11 March 2026 00:56:58 +0000 (0:00:00.613) 0:00:01.545 ******* 2026-03-11 00:58:59.818836 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:59.818843 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:59.818849 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:59.818856 | orchestrator | 2026-03-11 00:58:59.818862 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-11 00:58:59.818869 | orchestrator | Wednesday 11 March 2026 00:56:58 +0000 (0:00:00.311) 0:00:01.856 ******* 2026-03-11 00:58:59.818875 | orchestrator | 
ok: [testbed-node-3]
2026-03-11 00:58:59.818881 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.818888 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.818894 | orchestrator |
2026-03-11 00:58:59.818901 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-11 00:58:59.818907 | orchestrator | Wednesday 11 March 2026 00:56:59 +0000 (0:00:00.795) 0:00:02.652 *******
2026-03-11 00:58:59.818913 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.818943 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.818950 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.818956 | orchestrator |
2026-03-11 00:58:59.818963 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-11 00:58:59.818970 | orchestrator | Wednesday 11 March 2026 00:56:59 +0000 (0:00:00.256) 0:00:02.909 *******
2026-03-11 00:58:59.818976 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.818982 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.818989 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.818995 | orchestrator |
2026-03-11 00:58:59.819001 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-11 00:58:59.819008 | orchestrator | Wednesday 11 March 2026 00:56:59 +0000 (0:00:00.266) 0:00:03.175 *******
2026-03-11 00:58:59.819014 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.819037 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.819044 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.819051 | orchestrator |
2026-03-11 00:58:59.819060 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-11 00:58:59.819069 | orchestrator | Wednesday 11 March 2026 00:56:59 +0000 (0:00:00.276) 0:00:03.452 *******
2026-03-11 00:58:59.819079 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.819091 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.819101 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.819111 | orchestrator |
2026-03-11 00:58:59.819121 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-11 00:58:59.819132 | orchestrator | Wednesday 11 March 2026 00:57:00 +0000 (0:00:00.365) 0:00:03.817 *******
2026-03-11 00:58:59.819138 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.819145 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.819151 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.819157 | orchestrator |
2026-03-11 00:58:59.819163 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-11 00:58:59.819170 | orchestrator | Wednesday 11 March 2026 00:57:00 +0000 (0:00:00.244) 0:00:04.062 *******
2026-03-11 00:58:59.819176 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:58:59.819185 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:58:59.819211 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:58:59.819221 | orchestrator |
2026-03-11 00:58:59.819230 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-11 00:58:59.819239 | orchestrator | Wednesday 11 March 2026 00:57:01 +0000 (0:00:00.552) 0:00:04.615 *******
2026-03-11 00:58:59.819250 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.819259 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.819268 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.819277 | orchestrator |
2026-03-11 00:58:59.819287 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-11 00:58:59.819298 | orchestrator | Wednesday 11 March 2026 00:57:01 +0000 (0:00:01.968) 0:00:05.002 *******
2026-03-11 00:58:59.819308 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:58:59.819319 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:58:59.819329 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:58:59.819340 | orchestrator |
2026-03-11 00:58:59.819350 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-11 00:58:59.819361 | orchestrator | Wednesday 11 March 2026 00:57:03 +0000 (0:00:01.968) 0:00:06.970 *******
2026-03-11 00:58:59.819372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:58:59.819383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:58:59.819393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:58:59.819401 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.819418 | orchestrator |
2026-03-11 00:58:59.819474 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-11 00:58:59.819481 | orchestrator | Wednesday 11 March 2026 00:57:04 +0000 (0:00:00.592) 0:00:07.562 *******
2026-03-11 00:58:59.819490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819513 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.819519 | orchestrator |
2026-03-11 00:58:59.819525 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-11 00:58:59.819532 | orchestrator | Wednesday 11 March 2026 00:57:04 +0000 (0:00:00.767) 0:00:08.330 *******
2026-03-11 00:58:59.819540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819561 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.819598 | orchestrator |
2026-03-11 00:58:59.819609 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-11 00:58:59.819615 | orchestrator | Wednesday 11 March 2026 00:57:05 +0000 (0:00:00.333) 0:00:08.664 *******
2026-03-11 00:58:59.819631 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '02f046d0e698', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-11 00:57:02.196898', 'end': '2026-03-11 00:57:02.222928', 'delta': '0:00:00.026030', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['02f046d0e698'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819641 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '61b7433eb860', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-11 00:57:02.842430', 'end': '2026-03-11 00:57:02.869391', 'delta': '0:00:00.026961', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['61b7433eb860'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819757 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6fca6e2bd464', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-11 00:57:03.352049', 'end': '2026-03-11 00:57:03.379093', 'delta': '0:00:00.027044', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6fca6e2bd464'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.819775 | orchestrator |
2026-03-11 00:58:59.819785 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-11 00:58:59.819795 | orchestrator | Wednesday 11 March 2026 00:57:05 +0000 (0:00:00.199) 0:00:08.863 *******
2026-03-11 00:58:59.819805 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.819815 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.819825 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.819836 | orchestrator |
2026-03-11 00:58:59.819846 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-11 00:58:59.819856 | orchestrator | Wednesday 11 March 2026 00:57:05 +0000 (0:00:00.408) 0:00:09.272 *******
2026-03-11 00:58:59.819868 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-11 00:58:59.819874 | orchestrator |
2026-03-11 00:58:59.819880 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-11 00:58:59.819887 | orchestrator | Wednesday 11 March 2026 00:57:07 +0000 (0:00:01.644) 0:00:10.916 *******
2026-03-11 00:58:59.819893 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.819899 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.819905 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.819912 | orchestrator |
2026-03-11 00:58:59.819918 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-11 00:58:59.819924 | orchestrator | Wednesday 11 March 2026 00:57:07 +0000 (0:00:00.311) 0:00:11.228 *******
2026-03-11 00:58:59.819930 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.819936 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.819943 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.819949 | orchestrator |
2026-03-11 00:58:59.819955 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-11 00:58:59.819961 | orchestrator | Wednesday 11 March 2026 00:57:08 +0000 (0:00:00.397) 0:00:11.626 *******
2026-03-11 00:58:59.819967 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.819973 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.819979 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.819985 | orchestrator |
2026-03-11 00:58:59.819992 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-11 00:58:59.819998 | orchestrator | Wednesday 11 March 2026 00:57:08 +0000 (0:00:00.448) 0:00:12.075 *******
2026-03-11 00:58:59.820004 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.820010 | orchestrator |
2026-03-11 00:58:59.820016 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-11 00:58:59.820022 | orchestrator | Wednesday 11 March 2026 00:57:08 +0000 (0:00:00.129) 0:00:12.204 *******
2026-03-11 00:58:59.820028 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820042 | orchestrator |
2026-03-11 00:58:59.820049 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-11 00:58:59.820055 | orchestrator | Wednesday 11 March 2026 00:57:08 +0000 (0:00:00.233) 0:00:12.438 *******
2026-03-11 00:58:59.820061 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820067 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820073 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.820079 | orchestrator |
2026-03-11 00:58:59.820086 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-11 00:58:59.820092 | orchestrator | Wednesday 11 March 2026 00:57:09 +0000 (0:00:00.314) 0:00:12.752 *******
2026-03-11 00:58:59.820098 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820123 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820138 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.820145 | orchestrator |
2026-03-11 00:58:59.820151 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-11 00:58:59.820157 | orchestrator | Wednesday 11 March 2026 00:57:09 +0000 (0:00:00.318) 0:00:13.071 *******
2026-03-11 00:58:59.820164 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820170 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820176 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.820182 | orchestrator |
2026-03-11 00:58:59.820188 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-11 00:58:59.820194 | orchestrator | Wednesday 11 March 2026 00:57:10 +0000 (0:00:00.475) 0:00:13.547 *******
2026-03-11 00:58:59.820201 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820207 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820213 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.820219 | orchestrator |
2026-03-11 00:58:59.820225 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-11 00:58:59.820231 | orchestrator | Wednesday 11 March 2026 00:57:10 +0000 (0:00:00.336) 0:00:13.884 *******
2026-03-11 00:58:59.820237 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820243 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820250 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.820256 | orchestrator |
2026-03-11 00:58:59.820262 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-11 00:58:59.820268 | orchestrator | Wednesday 11 March 2026 00:57:10 +0000 (0:00:00.309) 0:00:14.193 *******
2026-03-11 00:58:59.820274 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820280 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820286 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.820327 | orchestrator |
2026-03-11 00:58:59.820345 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-11 00:58:59.820356 | orchestrator | Wednesday 11 March 2026 00:57:11 +0000 (0:00:00.318) 0:00:14.511 *******
2026-03-11 00:58:59.820365 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820375 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820384 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.820393 | orchestrator |
2026-03-11 00:58:59.820403 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-11 00:58:59.820412 | orchestrator | Wednesday 11 March 2026 00:57:11 +0000 (0:00:00.534) 0:00:15.046 *******
2026-03-11 00:58:59.820424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780', 'dm-uuid-LVM-VhTvUy8RvGHmgQbSGejj2cFr5C79WFT6Sw4HHKX2gQ9Zm965zwcEXUzxkMLrdzNW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68', 'dm-uuid-LVM-Ibnvjb7qiyL3oKlGZEawB6I1PxbXAVvpsHGJ4HPJaZl9NC2bCMa0fe5u5ROaJIBl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sTmUoR-Ut6J-4hP1-1GLB-Jxdn-0eBV-X9DQAQ', 'scsi-0QEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7', 'scsi-SQEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VS3hfm-tDrl-9AMM-2hPw-Q0ky-zJOF-9LCQvj', 'scsi-0QEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4', 'scsi-SQEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4', 'dm-uuid-LVM-ayxYQM6BgxOnDbQpTfY36B6k6R58GQx52b9wUaDw5kmGghdJfV78isyTrF2Db4mX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062', 'scsi-SQEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534', 'dm-uuid-LVM-zXdwVZqaatHAISu1ScQeMh8An0eym0d9aeSkX7kRNauWhsMRMPGSMzb91ZF2UJf3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820785 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.820791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3lUuww-Veet-z76Z-cWCI-ccba-Waub-32H1PZ', 'scsi-0QEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b', 'scsi-SQEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ttRbEm-RD1J-jehV-cszL-zUf6-jVNf-8qcgVJ', 'scsi-0QEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d', 'scsi-SQEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4', 'scsi-SQEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-11 00:58:59.820953 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.820960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9', 'dm-uuid-LVM-RwBigCbtDnPNtpLNd3NQBMoVopg18EfqpOkfQGT603HfLPQy3J2C48eLgkQMUYmY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1', 'dm-uuid-LVM-eLU561C8FCWuxkw37i12AU1RPhNNcWoCbCF5MFGSO9qg37pntjfArU8cBAYHmszD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.820994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.821000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-11 00:58:59.821007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '',
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:59.821013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:59.821019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:59.821029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:59.821036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:59.821049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:59.821062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WqRIbN-wezn-9aAS-9Bct-7SUf-mOKz-kuNUw2', 'scsi-0QEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5', 'scsi-SQEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:59.821072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3xMrj1-W0UW-AeFs-gIlM-Xkde-1FKU-FN31Yv', 'scsi-0QEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20', 'scsi-SQEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:59.821079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb', 'scsi-SQEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:59.821090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:59.821101 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:58:59.821108 | orchestrator | 2026-03-11 00:58:59.821114 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-11 00:58:59.821120 | orchestrator | Wednesday 11 March 2026 00:57:12 +0000 (0:00:00.566) 0:00:15.613 ******* 2026-03-11 00:58:59.821127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780', 'dm-uuid-LVM-VhTvUy8RvGHmgQbSGejj2cFr5C79WFT6Sw4HHKX2gQ9Zm965zwcEXUzxkMLrdzNW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68', 'dm-uuid-LVM-Ibnvjb7qiyL3oKlGZEawB6I1PxbXAVvpsHGJ4HPJaZl9NC2bCMa0fe5u5ROaJIBl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821151 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821179 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4', 'dm-uuid-LVM-ayxYQM6BgxOnDbQpTfY36B6k6R58GQx52b9wUaDw5kmGghdJfV78isyTrF2Db4mX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821191 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534', 'dm-uuid-LVM-zXdwVZqaatHAISu1ScQeMh8An0eym0d9aeSkX7kRNauWhsMRMPGSMzb91ZF2UJf3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821205 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:58:59.821218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821236 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821253 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15', 
'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16', 'scsi-SQEMU_QEMU_HARDDISK_00967594-40dd-4a79-bd3f-9f82494451f1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821264 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1f24027a--cb62--5112--a2b4--0ff1a158a780-osd--block--1f24027a--cb62--5112--a2b4--0ff1a158a780'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sTmUoR-Ut6J-4hP1-1GLB-Jxdn-0eBV-X9DQAQ', 'scsi-0QEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7', 'scsi-SQEMU_QEMU_HARDDISK_bb163787-5642-41ea-bb50-14394c4239c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821276 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821284 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--930a51f3--082d--5f24--af57--1314a0ff4b68-osd--block--930a51f3--082d--5f24--af57--1314a0ff4b68'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VS3hfm-tDrl-9AMM-2hPw-Q0ky-zJOF-9LCQvj', 'scsi-0QEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4', 'scsi-SQEMU_QEMU_HARDDISK_ac11d5e4-d53a-4ea4-b7b7-81bfc32957e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821294 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:59.821304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062', 'scsi-SQEMU_QEMU_HARDDISK_ef09bb17-59a8-4317-bed7-0146c94a1062'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821335 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_317f502c-791e-4152-8dc2-509ac4c350a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821345 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.821351 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9a64462a--5614--5a25--979d--2f017565a0c4-osd--block--9a64462a--5614--5a25--979d--2f017565a0c4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3lUuww-Veet-z76Z-cWCI-ccba-Waub-32H1PZ', 'scsi-0QEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b', 'scsi-SQEMU_QEMU_HARDDISK_747dd4bc-1e4a-4053-bdf0-887e0b92b80b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821357 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9e6773b3--a2d9--5476--8e14--434a68284534-osd--block--9e6773b3--a2d9--5476--8e14--434a68284534'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ttRbEm-RD1J-jehV-cszL-zUf6-jVNf-8qcgVJ', 'scsi-0QEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d', 'scsi-SQEMU_QEMU_HARDDISK_1656cb8a-d6e3-4504-aba1-0af808046f0d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821370 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4', 'scsi-SQEMU_QEMU_HARDDISK_37fe87c5-ca63-4522-b75b-0d9e996155b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821381 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821387 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.821392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9', 'dm-uuid-LVM-RwBigCbtDnPNtpLNd3NQBMoVopg18EfqpOkfQGT603HfLPQy3J2C48eLgkQMUYmY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821398 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1', 'dm-uuid-LVM-eLU561C8FCWuxkw37i12AU1RPhNNcWoCbCF5MFGSO9qg37pntjfArU8cBAYHmszD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821412 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821422 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821432 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821455 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821472 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbed40d1-1e79-4316-99a8-e618a0da2df7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821478 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9-osd--block--5d149e3f--abc8--57c5--b2f4--c991fc87e4f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WqRIbN-wezn-9aAS-9Bct-7SUf-mOKz-kuNUw2', 'scsi-0QEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5', 'scsi-SQEMU_QEMU_HARDDISK_cd4ac081-6fbb-4e27-9e74-8104c0078ac5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821484 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--12aec0f2--63b1--5667--a447--7095f264ece1-osd--block--12aec0f2--63b1--5667--a447--7095f264ece1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3xMrj1-W0UW-AeFs-gIlM-Xkde-1FKU-FN31Yv', 'scsi-0QEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20', 'scsi-SQEMU_QEMU_HARDDISK_3907c798-8bb0-4366-8422-7f195107ce20'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821496 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb', 'scsi-SQEMU_QEMU_HARDDISK_7c68b4db-9517-4776-878e-5cc78b8cffbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821505 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:59.821511 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.821516 | orchestrator |
2026-03-11 00:58:59.821522 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-11 00:58:59.821527 | orchestrator | Wednesday 11 March 2026 00:57:12 +0000 (0:00:00.617) 0:00:16.230 *******
2026-03-11 00:58:59.821533 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.821539 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.821544 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.821549 | orchestrator |
2026-03-11 00:58:59.821555 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-11 00:58:59.821561 | orchestrator | Wednesday 11 March 2026 00:57:13 +0000 (0:00:00.651) 0:00:16.882 *******
2026-03-11 00:58:59.821589 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.821595 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.821601 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.821607 | orchestrator |
2026-03-11 00:58:59.821613 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-11 00:58:59.821619 | orchestrator | Wednesday 11 March 2026 00:57:13 +0000 (0:00:00.509) 0:00:17.391 *******
2026-03-11 00:58:59.821626 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.821632 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.821638 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.821644 | orchestrator |
2026-03-11 00:58:59.821650 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-11 00:58:59.821656 | orchestrator | Wednesday 11 March 2026 00:57:14 +0000 (0:00:00.652) 0:00:18.043 *******
2026-03-11 00:58:59.821662 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.821668 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.821674 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.821681 | orchestrator |
2026-03-11 00:58:59.821687 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-11 00:58:59.821698 | orchestrator | Wednesday 11 March 2026 00:57:14 +0000 (0:00:00.290) 0:00:18.334 *******
2026-03-11 00:58:59.821704 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.821710 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.821717 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.821723 | orchestrator |
2026-03-11 00:58:59.821729 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-11 00:58:59.821735 | orchestrator | Wednesday 11 March 2026 00:57:15 +0000 (0:00:00.455) 0:00:18.790 *******
2026-03-11 00:58:59.821742 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.821748 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.821754 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.821759 | orchestrator |
2026-03-11 00:58:59.821765 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-11 00:58:59.821770 | orchestrator | Wednesday 11 March 2026 00:57:15 +0000 (0:00:00.602) 0:00:19.392 *******
2026-03-11 00:58:59.821776 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:58:59.821782 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:58:59.821787 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:58:59.821793 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:58:59.821798 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:58:59.821803 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:58:59.821809 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:58:59.821814 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:58:59.821819 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:58:59.821825 | orchestrator |
2026-03-11 00:58:59.821830 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-11 00:58:59.821836 | orchestrator | Wednesday 11 March 2026 00:57:16 +0000 (0:00:00.877) 0:00:20.270 *******
2026-03-11 00:58:59.821841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:58:59.821851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:58:59.821857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:58:59.821862 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.821868 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:58:59.821873 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:58:59.821878 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:58:59.821884 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.821889 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:58:59.821894 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:58:59.821899 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:58:59.821905 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.821910 | orchestrator |
2026-03-11 00:58:59.821916 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-11 00:58:59.821921 | orchestrator | Wednesday 11 March 2026 00:57:17 +0000 (0:00:00.364) 0:00:20.635 *******
2026-03-11 00:58:59.821927 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:58:59.821933 | orchestrator |
2026-03-11 00:58:59.821938 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-11 00:58:59.821945 | orchestrator | Wednesday 11 March 2026 00:57:17 +0000 (0:00:00.717) 0:00:21.352 *******
2026-03-11 00:58:59.821953 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.821959 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.821964 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.821970 | orchestrator |
2026-03-11 00:58:59.821975 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-11 00:58:59.821984 | orchestrator | Wednesday 11 March 2026 00:57:18 +0000 (0:00:00.315) 0:00:21.668 *******
2026-03-11 00:58:59.821990 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.821995 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.822000 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.822005 | orchestrator |
2026-03-11 00:58:59.822011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-11 00:58:59.822052 | orchestrator | Wednesday 11 March 2026 00:57:18 +0000 (0:00:00.303) 0:00:21.971 *******
2026-03-11 00:58:59.822057 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.822063 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.822068 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:59.822073 | orchestrator |
2026-03-11 00:58:59.822079 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-11 00:58:59.822084 | orchestrator | Wednesday 11 March 2026 00:57:18 +0000 (0:00:00.302) 0:00:22.273 *******
2026-03-11 00:58:59.822089 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.822095 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.822100 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.822114 | orchestrator |
2026-03-11 00:58:59.822120 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-11 00:58:59.822125 | orchestrator | Wednesday 11 March 2026 00:57:19 +0000 (0:00:00.932) 0:00:23.206 *******
2026-03-11 00:58:59.822138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:59.822143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:58:59.822149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:58:59.822154 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.822160 | orchestrator |
2026-03-11 00:58:59.822165 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-11 00:58:59.822170 | orchestrator | Wednesday 11 March 2026 00:57:20 +0000 (0:00:00.374) 0:00:23.580 *******
2026-03-11 00:58:59.822176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:59.822181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:58:59.822186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:58:59.822192 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.822197 | orchestrator |
2026-03-11 00:58:59.822202 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-11 00:58:59.822207 | orchestrator | Wednesday 11 March 2026 00:57:20 +0000 (0:00:00.401) 0:00:23.981 *******
2026-03-11 00:58:59.822213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:59.822218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:58:59.822223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:58:59.822229 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.822234 | orchestrator |
2026-03-11 00:58:59.822240 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-11 00:58:59.822245 | orchestrator | Wednesday 11 March 2026 00:57:20 +0000 (0:00:00.371) 0:00:24.353 *******
2026-03-11 00:58:59.822250 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:59.822255 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:59.822261 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:59.822266 | orchestrator |
2026-03-11 00:58:59.822274 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-11 00:58:59.822282 | orchestrator | Wednesday 11 March 2026 00:57:21 +0000 (0:00:00.478) 0:00:24.681 *******
2026-03-11 00:58:59.822290 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-11 00:58:59.822299 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-11 00:58:59.822307 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-11 00:58:59.822316 | orchestrator |
2026-03-11 00:58:59.822324 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-11 00:58:59.822332 | orchestrator | Wednesday 11 March 2026 00:57:21 +0000 (0:00:00.478) 0:00:25.160 *******
2026-03-11 00:58:59.822346 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:58:59.822359 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:58:59.822367 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:58:59.822375 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:59.822383 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-11 00:58:59.822392 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-11 00:58:59.822401 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-11 00:58:59.822411 | orchestrator |
2026-03-11 00:58:59.822416 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-11 00:58:59.822422 | orchestrator | Wednesday 11 March 2026 00:57:22 +0000 (0:00:01.027) 0:00:26.187 *******
2026-03-11 00:58:59.822427 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:58:59.822432 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:58:59.822438 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:58:59.822443 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:59.822448 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-11 00:58:59.822453 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-11 00:58:59.822463 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-11 00:58:59.822469 | orchestrator |
2026-03-11 00:58:59.822474 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-11 00:58:59.822479 | orchestrator | Wednesday 11 March 2026 00:57:24 +0000 (0:00:02.031) 0:00:28.218 *******
2026-03-11 00:58:59.822485 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:59.822490 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:59.822495 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-11 00:58:59.822501 | orchestrator |
2026-03-11 00:58:59.822506 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-11 00:58:59.822511 | orchestrator | Wednesday 11 March 2026 00:57:25 +0000 (0:00:00.375) 0:00:28.594 *******
2026-03-11 00:58:59.822517 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:59.822523 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:59.822529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:59.822534 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:59.822540 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:59.822550 | orchestrator |
2026-03-11 00:58:59.822555 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-11 00:58:59.822561 | orchestrator | Wednesday 11 March 2026 00:58:10 +0000 (0:00:45.657) 0:01:14.251 *******
2026-03-11 00:58:59.822589 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822594 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822600 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822605 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822611 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822616 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822622 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-11 00:58:59.822627 | orchestrator |
2026-03-11 00:58:59.822632 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-11 00:58:59.822638 | orchestrator | Wednesday 11 March 2026 00:58:31 +0000 (0:00:20.584) 0:01:34.835 *******
2026-03-11 00:58:59.822647 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822653 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822658 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822663 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822669 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822674 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822679 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:58:59.822685 | orchestrator |
2026-03-11 00:58:59.822690 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-11 00:58:59.822696 | orchestrator | Wednesday 11 March 2026 00:58:42 +0000 (0:00:10.898) 0:01:45.734 *******
2026-03-11 00:58:59.822701 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822707 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:59.822712 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:59.822717 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822723 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:59.822731 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:59.822737 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822742 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:59.822748 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:59.822753 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822758 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:59.822764 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:59.822769 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:59.822774 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:59.822779 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-11 00:58:59.822790 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:58:59.822795 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-11 00:58:59.822800 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-11 00:58:59.822806 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-11 00:58:59.822811 | orchestrator | 2026-03-11 00:58:59.822817 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:58:59.822822 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-11 00:58:59.822829 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-11 00:58:59.822834 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-11 00:58:59.822840 | orchestrator | 2026-03-11 00:58:59.822845 | orchestrator | 2026-03-11 00:58:59.822850 | orchestrator | 2026-03-11 00:58:59.822856 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:58:59.822861 | orchestrator | Wednesday 11 March 2026 00:58:58 +0000 (0:00:16.328) 0:02:02.063 ******* 2026-03-11 00:58:59.822866 | orchestrator | =============================================================================== 2026-03-11 00:58:59.822872 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.66s 2026-03-11 00:58:59.822877 | orchestrator | generate keys ---------------------------------------------------------- 20.58s 2026-03-11 00:58:59.822882 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.33s 
2026-03-11 00:58:59.822887 | orchestrator | get keys from monitors ------------------------------------------------- 10.90s
2026-03-11 00:58:59.822893 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.03s
2026-03-11 00:58:59.822898 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.97s
2026-03-11 00:58:59.822903 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.64s
2026-03-11 00:58:59.822909 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.03s
2026-03-11 00:58:59.822914 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.93s
2026-03-11 00:58:59.822920 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s
2026-03-11 00:58:59.822925 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2026-03-11 00:58:59.822930 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s
2026-03-11 00:58:59.822940 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s
2026-03-11 00:58:59.822945 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s
2026-03-11 00:58:59.822950 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.65s
2026-03-11 00:58:59.822956 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s
2026-03-11 00:58:59.822961 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.61s
2026-03-11 00:58:59.822966 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.60s
2026-03-11 00:58:59.822972 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.59s
2026-03-11 00:58:59.822977 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s
2026-03-11 00:58:59.822982 | orchestrator | 2026-03-11 00:58:59 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:58:59.822988 | orchestrator | 2026-03-11 00:58:59 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:58:59.822997 | orchestrator | 2026-03-11 00:58:59 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:02.878381 | orchestrator | 2026-03-11 00:59:02 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:02.880898 | orchestrator | 2026-03-11 00:59:02 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:02.887786 | orchestrator | 2026-03-11 00:59:02 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:02.887884 | orchestrator | 2026-03-11 00:59:02 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:05.929532 | orchestrator | 2026-03-11 00:59:05 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:05.932036 | orchestrator | 2026-03-11 00:59:05 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:05.933706 | orchestrator | 2026-03-11 00:59:05 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:05.933749 | orchestrator | 2026-03-11 00:59:05 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:08.983988 | orchestrator | 2026-03-11 00:59:08 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:08.986242 | orchestrator | 2026-03-11 00:59:08 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:08.988957 | orchestrator | 2026-03-11 00:59:08 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:08.989007 | orchestrator | 2026-03-11 00:59:08 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:12.032888 | orchestrator | 2026-03-11 00:59:12 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:12.032983 | orchestrator | 2026-03-11 00:59:12 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:12.034163 | orchestrator | 2026-03-11 00:59:12 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:12.035830 | orchestrator | 2026-03-11 00:59:12 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:15.075964 | orchestrator | 2026-03-11 00:59:15 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:15.077615 | orchestrator | 2026-03-11 00:59:15 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:15.079299 | orchestrator | 2026-03-11 00:59:15 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:15.079373 | orchestrator | 2026-03-11 00:59:15 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:18.130816 | orchestrator | 2026-03-11 00:59:18 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:18.133255 | orchestrator | 2026-03-11 00:59:18 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:18.135662 | orchestrator | 2026-03-11 00:59:18 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:18.135861 | orchestrator | 2026-03-11 00:59:18 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:21.188713 | orchestrator | 2026-03-11 00:59:21 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:21.191048 | orchestrator | 2026-03-11 00:59:21 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:21.193564 | orchestrator | 2026-03-11 00:59:21 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:21.193837 | orchestrator | 2026-03-11 00:59:21 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:24.241930 | orchestrator | 2026-03-11 00:59:24 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:24.243218 | orchestrator | 2026-03-11 00:59:24 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:24.246816 | orchestrator | 2026-03-11 00:59:24 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:24.248439 | orchestrator | 2026-03-11 00:59:24 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:27.289184 | orchestrator | 2026-03-11 00:59:27 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:27.291351 | orchestrator | 2026-03-11 00:59:27 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED
2026-03-11 00:59:27.293006 | orchestrator | 2026-03-11 00:59:27 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state STARTED
2026-03-11 00:59:27.293056 | orchestrator | 2026-03-11 00:59:27 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:59:30.336804 | orchestrator | 2026-03-11 00:59:30 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED
2026-03-11 00:59:30.343833 | orchestrator |
2026-03-11 00:59:30.343913 | orchestrator |
2026-03-11 00:59:30.343921 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 00:59:30.343927 | orchestrator |
2026-03-11 00:59:30.344016 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 00:59:30.344024 | orchestrator | Wednesday 11 March 2026 00:57:57 +0000 (0:00:00.257) 0:00:00.257 *******
2026-03-11 00:59:30.344030 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.344145 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.344151 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.344155 | orchestrator |
2026-03-11 00:59:30.344159 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 00:59:30.344164 | orchestrator | Wednesday 11 March 2026 00:57:57 +0000 (0:00:00.278) 0:00:00.535 *******
2026-03-11 00:59:30.344171 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-11 00:59:30.344178 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-11 00:59:30.344184 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-11 00:59:30.344190 | orchestrator |
2026-03-11 00:59:30.344196 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-11 00:59:30.344203 | orchestrator |
2026-03-11 00:59:30.344209 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-11 00:59:30.344216 | orchestrator | Wednesday 11 March 2026 00:57:57 +0000 (0:00:00.430) 0:00:00.966 *******
2026-03-11 00:59:30.344223 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:59:30.344230 | orchestrator |
2026-03-11 00:59:30.344236 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-11 00:59:30.344242 | orchestrator | Wednesday 11 March 2026 00:57:58 +0000 (0:00:00.519) 0:00:01.485 *******
2026-03-11 00:59:30.344266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-11 00:59:30.344308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-11 00:59:30.344321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-11 00:59:30.344334 | orchestrator |
2026-03-11 00:59:30.344340 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-11 00:59:30.344346 | orchestrator | Wednesday 11 March 2026 00:57:59 +0000 (0:00:01.115) 0:00:02.601 *******
2026-03-11 00:59:30.344352 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.344358 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.344364 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.344370 | orchestrator |
2026-03-11 00:59:30.344376 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-11 00:59:30.344383 | orchestrator | Wednesday 11 March 2026 00:57:59 +0000 (0:00:00.448) 0:00:03.049 *******
2026-03-11 00:59:30.344394 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-11 00:59:30.344400 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-11 00:59:30.344406 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-11 00:59:30.344413 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-11 00:59:30.344424 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-11 00:59:30.344430 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-11 00:59:30.344436 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-11 00:59:30.344441 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-11 00:59:30.344447 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-11 00:59:30.344453 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-11 00:59:30.344459 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-11 00:59:30.344465 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-11 00:59:30.344471 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-11 00:59:30.344477 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-11 00:59:30.344483 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-11 00:59:30.344488 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-11 00:59:30.344499 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-11 00:59:30.344504 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-11 00:59:30.344510 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-11 00:59:30.344516 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-11 00:59:30.344522 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-11 00:59:30.344528 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-11 00:59:30.344545 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-11 00:59:30.344551 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-11 00:59:30.344558 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-11 00:59:30.344566 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-11 00:59:30.344572 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-11 00:59:30.344579 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-11 00:59:30.344586 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-11 00:59:30.344592 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-11 00:59:30.344601 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-11 00:59:30.344607 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-11 00:59:30.344613 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-11 00:59:30.344620 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-11 00:59:30.344626 | orchestrator |
2026-03-11 00:59:30.344633 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.344638 | orchestrator | Wednesday 11 March 2026 00:58:00 +0000 (0:00:00.729) 0:00:03.779 *******
2026-03-11 00:59:30.344644 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.344650 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.344656 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.344662 | orchestrator |
2026-03-11 00:59:30.344668 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.344674 | orchestrator | Wednesday 11 March 2026 00:58:01 +0000 (0:00:00.306) 0:00:04.086 *******
2026-03-11 00:59:30.344683 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344690 | orchestrator |
2026-03-11 00:59:30.344696 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-11 00:59:30.344701 | orchestrator | Wednesday 11 March 2026 00:58:01 +0000 (0:00:00.139) 0:00:04.225 *******
2026-03-11 00:59:30.344707 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344713 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:59:30.344724 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:59:30.344730 | orchestrator |
2026-03-11 00:59:30.344736 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.344742 | orchestrator | Wednesday 11 March 2026 00:58:01 +0000 (0:00:00.462) 0:00:04.688 *******
2026-03-11 00:59:30.344749 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.344755 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.344761 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.344765 | orchestrator |
2026-03-11 00:59:30.344770 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.344777 | orchestrator | Wednesday 11 March 2026 00:58:01 +0000 (0:00:00.329) 0:00:05.018 *******
2026-03-11 00:59:30.344783 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344790 | orchestrator |
2026-03-11 00:59:30.344798 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-11 00:59:30.344802 | orchestrator | Wednesday 11 March 2026 00:58:02 +0000 (0:00:00.116) 0:00:05.135 *******
2026-03-11 00:59:30.344807 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344811 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:59:30.344815 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:59:30.344820 | orchestrator |
2026-03-11 00:59:30.344824 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.344829 | orchestrator | Wednesday 11 March 2026 00:58:02 +0000 (0:00:00.271) 0:00:05.406 *******
2026-03-11 00:59:30.344833 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.344837 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.344842 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.344846 | orchestrator |
2026-03-11 00:59:30.344851 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.344855 | orchestrator | Wednesday 11 March 2026 00:58:02 +0000 (0:00:00.335) 0:00:05.742 *******
2026-03-11 00:59:30.344859 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344863 | orchestrator |
2026-03-11 00:59:30.344869 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-11 00:59:30.344875 | orchestrator | Wednesday 11 March 2026 00:58:02 +0000 (0:00:00.306) 0:00:06.049 *******
2026-03-11 00:59:30.344882 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344888 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:59:30.344896 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:59:30.344903 | orchestrator |
2026-03-11 00:59:30.344910 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.344916 | orchestrator | Wednesday 11 March 2026 00:58:03 +0000 (0:00:00.290) 0:00:06.339 *******
2026-03-11 00:59:30.344922 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.344928 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.344935 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.344940 | orchestrator |
2026-03-11 00:59:30.344945 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.344949 | orchestrator | Wednesday 11 March 2026 00:58:03 +0000 (0:00:00.331) 0:00:06.671 *******
2026-03-11 00:59:30.344954 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344958 | orchestrator |
2026-03-11 00:59:30.344963 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-11 00:59:30.344967 | orchestrator | Wednesday 11 March 2026 00:58:03 +0000 (0:00:00.122) 0:00:06.793 *******
2026-03-11 00:59:30.344972 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.344976 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:59:30.344980 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:59:30.344985 | orchestrator |
2026-03-11 00:59:30.344989 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.344993 | orchestrator | Wednesday 11 March 2026 00:58:04 +0000 (0:00:00.271) 0:00:07.065 *******
2026-03-11 00:59:30.344997 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.345002 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.345006 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.345015 | orchestrator |
2026-03-11 00:59:30.345019 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.345023 | orchestrator | Wednesday 11 March 2026 00:58:04 +0000 (0:00:00.498) 0:00:07.564 *******
2026-03-11 00:59:30.345028 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.345032 | orchestrator |
2026-03-11 00:59:30.345039 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-11 00:59:30.345044 | orchestrator | Wednesday 11 March 2026 00:58:04 +0000 (0:00:00.137) 0:00:07.701 *******
2026-03-11 00:59:30.345048 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.345052 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:59:30.345057 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:59:30.345061 | orchestrator |
2026-03-11 00:59:30.345066 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.345070 | orchestrator | Wednesday 11 March 2026 00:58:04 +0000 (0:00:00.280) 0:00:07.982 *******
2026-03-11 00:59:30.345074 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.345079 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.345083 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.345088 | orchestrator |
2026-03-11 00:59:30.345094 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.345101 | orchestrator | Wednesday 11 March 2026 00:58:05 +0000 (0:00:00.287) 0:00:08.269 *******
2026-03-11 00:59:30.345108 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.345114 | orchestrator |
2026-03-11 00:59:30.345120 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-11 00:59:30.345125 | orchestrator | Wednesday 11 March 2026 00:58:05 +0000 (0:00:00.145) 0:00:08.415 *******
2026-03-11 00:59:30.345130 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.345134 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:59:30.345138 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:59:30.345143 | orchestrator |
2026-03-11 00:59:30.345147 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.345156 | orchestrator | Wednesday 11 March 2026 00:58:05 +0000 (0:00:00.305) 0:00:08.720 *******
2026-03-11 00:59:30.345160 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.345163 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.345167 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.345171 | orchestrator |
2026-03-11 00:59:30.345175 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.345178 | orchestrator | Wednesday 11 March 2026 00:58:06 +0000 (0:00:00.494) 0:00:09.215 *******
2026-03-11 00:59:30.345182 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.345186 | orchestrator |
2026-03-11 00:59:30.345189 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-11 00:59:30.345193 | orchestrator | Wednesday 11 March 2026 00:58:06 +0000 (0:00:00.154) 0:00:09.369 *******
2026-03-11 00:59:30.345197 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:59:30.345201 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:59:30.345204 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:59:30.345208 | orchestrator |
2026-03-11 00:59:30.345212 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-11 00:59:30.345216 | orchestrator | Wednesday 11 March 2026 00:58:06 +0000 (0:00:00.289) 0:00:09.659 *******
2026-03-11 00:59:30.345221 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:59:30.345227 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:59:30.345237 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:59:30.345244 | orchestrator |
2026-03-11 00:59:30.345250 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-11 00:59:30.345256 | orchestrator | Wednesday 11 March 2026 00:58:06 +0000 (0:00:00.334) 0:00:09.994 *******
2026-03-11 00:59:30.345263 | orchestrator | skipping:
[testbed-node-0] 2026-03-11 00:59:30.345269 | orchestrator | 2026-03-11 00:59:30.345275 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:59:30.345282 | orchestrator | Wednesday 11 March 2026 00:58:07 +0000 (0:00:00.132) 0:00:10.126 ******* 2026-03-11 00:59:30.345297 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345303 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.345310 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.345315 | orchestrator | 2026-03-11 00:59:30.345321 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:59:30.345327 | orchestrator | Wednesday 11 March 2026 00:58:07 +0000 (0:00:00.508) 0:00:10.635 ******* 2026-03-11 00:59:30.345333 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:59:30.345339 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:59:30.345345 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:59:30.345351 | orchestrator | 2026-03-11 00:59:30.345360 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:59:30.345366 | orchestrator | Wednesday 11 March 2026 00:58:07 +0000 (0:00:00.310) 0:00:10.945 ******* 2026-03-11 00:59:30.345372 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345378 | orchestrator | 2026-03-11 00:59:30.345384 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:59:30.345390 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:00.139) 0:00:11.085 ******* 2026-03-11 00:59:30.345396 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345402 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.345408 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.345415 | orchestrator | 2026-03-11 00:59:30.345421 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-03-11 00:59:30.345427 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:00.365) 0:00:11.451 ******* 2026-03-11 00:59:30.345433 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:59:30.345439 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:59:30.345445 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:59:30.345451 | orchestrator | 2026-03-11 00:59:30.345458 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:59:30.345464 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:00.299) 0:00:11.751 ******* 2026-03-11 00:59:30.345470 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345476 | orchestrator | 2026-03-11 00:59:30.345482 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:59:30.345488 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:00.133) 0:00:11.885 ******* 2026-03-11 00:59:30.345495 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345501 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.345507 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.345513 | orchestrator | 2026-03-11 00:59:30.345519 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-11 00:59:30.345526 | orchestrator | Wednesday 11 March 2026 00:58:09 +0000 (0:00:00.479) 0:00:12.365 ******* 2026-03-11 00:59:30.345552 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:59:30.345560 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:59:30.345566 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:59:30.345572 | orchestrator | 2026-03-11 00:59:30.345578 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-11 00:59:30.345584 | orchestrator | Wednesday 11 March 2026 00:58:10 +0000 (0:00:01.577) 0:00:13.942 ******* 
2026-03-11 00:59:30.345590 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-11 00:59:30.345596 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-11 00:59:30.345602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-11 00:59:30.345608 | orchestrator | 2026-03-11 00:59:30.345614 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-11 00:59:30.345620 | orchestrator | Wednesday 11 March 2026 00:58:12 +0000 (0:00:01.844) 0:00:15.786 ******* 2026-03-11 00:59:30.345627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-11 00:59:30.345638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-11 00:59:30.345644 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-11 00:59:30.345651 | orchestrator | 2026-03-11 00:59:30.345661 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-11 00:59:30.345667 | orchestrator | Wednesday 11 March 2026 00:58:15 +0000 (0:00:02.442) 0:00:18.229 ******* 2026-03-11 00:59:30.345673 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-11 00:59:30.345679 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-11 00:59:30.345686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-11 00:59:30.345692 | orchestrator | 2026-03-11 00:59:30.345698 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-11 00:59:30.345704 | 
orchestrator | Wednesday 11 March 2026 00:58:17 +0000 (0:00:02.394) 0:00:20.624 ******* 2026-03-11 00:59:30.345710 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345716 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.345722 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.345728 | orchestrator | 2026-03-11 00:59:30.345734 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-11 00:59:30.345740 | orchestrator | Wednesday 11 March 2026 00:58:17 +0000 (0:00:00.319) 0:00:20.943 ******* 2026-03-11 00:59:30.345746 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345752 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.345759 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.345765 | orchestrator | 2026-03-11 00:59:30.345771 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-11 00:59:30.345777 | orchestrator | Wednesday 11 March 2026 00:58:18 +0000 (0:00:00.309) 0:00:21.252 ******* 2026-03-11 00:59:30.345783 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:59:30.345790 | orchestrator | 2026-03-11 00:59:30.345796 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-11 00:59:30.345802 | orchestrator | Wednesday 11 March 2026 00:58:18 +0000 (0:00:00.729) 0:00:21.982 ******* 2026-03-11 00:59:30.345814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:59:30.345832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:59:30.345848 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:59:30.345858 | orchestrator | 2026-03-11 00:59:30.345865 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-11 00:59:30.345871 | orchestrator | Wednesday 11 March 2026 00:58:20 +0000 (0:00:01.275) 0:00:23.258 ******* 2026-03-11 00:59:30.345882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:59:30.345890 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:59:30.345914 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.345922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:59:30.345928 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.345935 | orchestrator | 2026-03-11 00:59:30.345941 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-11 00:59:30.345947 | orchestrator | Wednesday 11 March 2026 00:58:20 +0000 (0:00:00.661) 0:00:23.919 ******* 2026-03-11 00:59:30.345965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:59:30.345972 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.345978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:59:30.345989 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.346002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:59:30.346009 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.346049 | orchestrator | 2026-03-11 00:59:30.346055 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-11 00:59:30.346061 | orchestrator | Wednesday 11 March 2026 00:58:21 +0000 (0:00:00.794) 0:00:24.714 ******* 2026-03-11 00:59:30.346072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:59:30.346091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:59:30.346102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:59:30.346114 | orchestrator | 2026-03-11 00:59:30.346120 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-11 00:59:30.346127 | orchestrator | Wednesday 11 March 2026 00:58:23 +0000 (0:00:01.596) 0:00:26.311 ******* 2026-03-11 00:59:30.346133 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:59:30.346139 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:59:30.346146 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:59:30.346152 | orchestrator | 2026-03-11 00:59:30.346158 | orchestrator | TASK [horizon : 
include_tasks] ************************************************* 2026-03-11 00:59:30.346169 | orchestrator | Wednesday 11 March 2026 00:58:23 +0000 (0:00:00.320) 0:00:26.631 ******* 2026-03-11 00:59:30.346175 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:59:30.346182 | orchestrator | 2026-03-11 00:59:30.346189 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-11 00:59:30.346195 | orchestrator | Wednesday 11 March 2026 00:58:24 +0000 (0:00:00.524) 0:00:27.155 ******* 2026-03-11 00:59:30.346201 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:59:30.346207 | orchestrator | 2026-03-11 00:59:30.346214 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-11 00:59:30.346220 | orchestrator | Wednesday 11 March 2026 00:58:26 +0000 (0:00:02.265) 0:00:29.420 ******* 2026-03-11 00:59:30.346227 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:59:30.346233 | orchestrator | 2026-03-11 00:59:30.346240 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-11 00:59:30.346246 | orchestrator | Wednesday 11 March 2026 00:58:29 +0000 (0:00:02.653) 0:00:32.074 ******* 2026-03-11 00:59:30.346252 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:59:30.346259 | orchestrator | 2026-03-11 00:59:30.346265 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-11 00:59:30.346271 | orchestrator | Wednesday 11 March 2026 00:58:44 +0000 (0:00:15.324) 0:00:47.399 ******* 2026-03-11 00:59:30.346278 | orchestrator | 2026-03-11 00:59:30.346284 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-11 00:59:30.346291 | orchestrator | Wednesday 11 March 2026 00:58:44 +0000 (0:00:00.067) 0:00:47.466 ******* 2026-03-11 00:59:30.346297 | 
orchestrator | 2026-03-11 00:59:30.346303 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-11 00:59:30.346309 | orchestrator | Wednesday 11 March 2026 00:58:44 +0000 (0:00:00.072) 0:00:47.538 ******* 2026-03-11 00:59:30.346315 | orchestrator | 2026-03-11 00:59:30.346322 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-11 00:59:30.346334 | orchestrator | Wednesday 11 March 2026 00:58:44 +0000 (0:00:00.067) 0:00:47.606 ******* 2026-03-11 00:59:30.346340 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:59:30.346347 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:59:30.346353 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:59:30.346360 | orchestrator | 2026-03-11 00:59:30.346367 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:59:30.346373 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-11 00:59:30.346381 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-11 00:59:30.346387 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-11 00:59:30.346394 | orchestrator | 2026-03-11 00:59:30.346400 | orchestrator | 2026-03-11 00:59:30.346406 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:59:30.346413 | orchestrator | Wednesday 11 March 2026 00:59:28 +0000 (0:00:43.595) 0:01:31.202 ******* 2026-03-11 00:59:30.346419 | orchestrator | =============================================================================== 2026-03-11 00:59:30.346425 | orchestrator | horizon : Restart horizon container ------------------------------------ 43.60s 2026-03-11 00:59:30.346432 | orchestrator | horizon : Running Horizon bootstrap container 
-------------------------- 15.32s 2026-03-11 00:59:30.346438 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.65s 2026-03-11 00:59:30.346444 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.44s 2026-03-11 00:59:30.346450 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.40s 2026-03-11 00:59:30.346457 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.27s 2026-03-11 00:59:30.346464 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.84s 2026-03-11 00:59:30.346470 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.60s 2026-03-11 00:59:30.346477 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s 2026-03-11 00:59:30.346487 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.28s 2026-03-11 00:59:30.346494 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.12s 2026-03-11 00:59:30.346501 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.79s 2026-03-11 00:59:30.346507 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-03-11 00:59:30.346513 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-03-11 00:59:30.346519 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2026-03-11 00:59:30.346526 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-03-11 00:59:30.346640 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-03-11 00:59:30.346650 | orchestrator | horizon : Update custom policy file name 
-------------------------------- 0.51s 2026-03-11 00:59:30.346657 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2026-03-11 00:59:30.346663 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-03-11 00:59:30.346669 | orchestrator | 2026-03-11 00:59:30 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 00:59:30.346680 | orchestrator | 2026-03-11 00:59:30 | INFO  | Task 20366898-1b79-409b-a2ff-9ab4b2800c3a is in state SUCCESS 2026-03-11 00:59:30.346686 | orchestrator | 2026-03-11 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:33.379183 | orchestrator | 2026-03-11 00:59:33 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state STARTED 2026-03-11 00:59:33.381378 | orchestrator | 2026-03-11 00:59:33 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 00:59:33.381426 | orchestrator | 2026-03-11 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:36.432811 | orchestrator | 2026-03-11 00:59:36 | INFO  | Task 5ffa2081-4f70-46e7-802a-08b593807c70 is in state SUCCESS 2026-03-11 00:59:36.433885 | orchestrator | 2026-03-11 00:59:36 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 00:59:36.433963 | orchestrator | 2026-03-11 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:39.487852 | orchestrator | 2026-03-11 00:59:39 | INFO  | Task 61d80dec-7159-4778-9d55-af58c311fc0f is in state STARTED 2026-03-11 00:59:39.489861 | orchestrator | 2026-03-11 00:59:39 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 00:59:39.489907 | orchestrator | 2026-03-11 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:42.527178 | orchestrator | 2026-03-11 00:59:42 | INFO  | Task 61d80dec-7159-4778-9d55-af58c311fc0f is in state STARTED 2026-03-11 00:59:42.528853 | 
orchestrator | 2026-03-11 00:59:42 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 00:59:42.528897 | orchestrator | 2026-03-11 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-03-11
until the next check 2026-03-11 01:00:31.238718 | orchestrator | 2026-03-11 01:00:31 | INFO  | Task 61d80dec-7159-4778-9d55-af58c311fc0f is in state STARTED 2026-03-11 01:00:31.238819 | orchestrator | 2026-03-11 01:00:31 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state STARTED 2026-03-11 01:00:31.238854 | orchestrator | 2026-03-11 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:34.270576 | orchestrator | 2026-03-11 01:00:34 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:00:34.274157 | orchestrator | 2026-03-11 01:00:34 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:00:34.274555 | orchestrator | 2026-03-11 01:00:34 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:00:34.275234 | orchestrator | 2026-03-11 01:00:34 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:00:34.276325 | orchestrator | 2026-03-11 01:00:34 | INFO  | Task 61d80dec-7159-4778-9d55-af58c311fc0f is in state SUCCESS 2026-03-11 01:00:34.276574 | orchestrator | 2026-03-11 01:00:34.276584 | orchestrator | 2026-03-11 01:00:34.276590 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-11 01:00:34.276596 | orchestrator | 2026-03-11 01:00:34.276601 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-11 01:00:34.276605 | orchestrator | Wednesday 11 March 2026 00:59:03 +0000 (0:00:00.150) 0:00:00.150 ******* 2026-03-11 01:00:34.276610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-11 01:00:34.276615 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276620 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 
2026-03-11 01:00:34.276624 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:34.276628 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276643 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:34.276648 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-11 01:00:34.276652 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:34.276665 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:34.276673 | orchestrator | 2026-03-11 01:00:34.276681 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-11 01:00:34.276688 | orchestrator | Wednesday 11 March 2026 00:59:08 +0000 (0:00:04.881) 0:00:05.031 ******* 2026-03-11 01:00:34.276695 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-11 01:00:34.276701 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276708 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276714 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:34.276720 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276726 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:34.276733 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-11 01:00:34.276740 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:34.276747 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:34.276753 | orchestrator | 2026-03-11 01:00:34.276760 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-11 01:00:34.276802 | orchestrator | Wednesday 11 March 2026 00:59:12 +0000 (0:00:04.393) 0:00:09.425 ******* 2026-03-11 01:00:34.276808 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-11 01:00:34.276812 | orchestrator | 2026-03-11 01:00:34.276816 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-11 01:00:34.276820 | orchestrator | Wednesday 11 March 2026 00:59:13 +0000 (0:00:00.926) 0:00:10.352 ******* 2026-03-11 01:00:34.276824 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-11 01:00:34.276828 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276832 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276836 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:34.276841 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276846 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:34.276852 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-11 01:00:34.276859 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:34.276865 | 
orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:34.276871 | orchestrator | 2026-03-11 01:00:34.276877 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-11 01:00:34.276884 | orchestrator | Wednesday 11 March 2026 00:59:25 +0000 (0:00:12.367) 0:00:22.720 ******* 2026-03-11 01:00:34.276889 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-11 01:00:34.276907 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-11 01:00:34.276914 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-11 01:00:34.276921 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-11 01:00:34.276937 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-11 01:00:34.276944 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-11 01:00:34.276951 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-11 01:00:34.276957 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-11 01:00:34.276964 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-11 01:00:34.276968 | orchestrator | 2026-03-11 01:00:34.276972 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-11 01:00:34.276976 | orchestrator | Wednesday 11 March 2026 00:59:28 +0000 (0:00:03.009) 0:00:25.729 ******* 2026-03-11 01:00:34.276981 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.admin.keyring) 2026-03-11 01:00:34.276985 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276989 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.276992 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:34.276996 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:34.277000 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:34.277004 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-11 01:00:34.277008 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:34.277017 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:34.277021 | orchestrator | 2026-03-11 01:00:34.277025 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:00:34.277028 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:00:34.277044 | orchestrator | 2026-03-11 01:00:34.277048 | orchestrator | 2026-03-11 01:00:34.277051 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:00:34.277055 | orchestrator | Wednesday 11 March 2026 00:59:35 +0000 (0:00:06.467) 0:00:32.197 ******* 2026-03-11 01:00:34.277059 | orchestrator | =============================================================================== 2026-03-11 01:00:34.277063 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.37s 2026-03-11 01:00:34.277075 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.47s 2026-03-11 01:00:34.277078 | orchestrator | Check if ceph keys exist 
------------------------------------------------ 4.88s
2026-03-11 01:00:34.277082 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.39s
2026-03-11 01:00:34.277086 | orchestrator | Check if target directories exist --------------------------------------- 3.01s
2026-03-11 01:00:34.277095 | orchestrator | Create share directory -------------------------------------------------- 0.93s
2026-03-11 01:00:34.277099 | orchestrator |
2026-03-11 01:00:34.277103 | orchestrator |
2026-03-11 01:00:34.277107 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-11 01:00:34.277110 | orchestrator |
2026-03-11 01:00:34.277114 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-11 01:00:34.277118 | orchestrator | Wednesday 11 March 2026 00:59:39 +0000 (0:00:00.221) 0:00:00.221 *******
2026-03-11 01:00:34.277122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-11 01:00:34.277127 | orchestrator |
2026-03-11 01:00:34.277131 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-11 01:00:34.277134 | orchestrator | Wednesday 11 March 2026 00:59:39 +0000 (0:00:00.222) 0:00:00.443 *******
2026-03-11 01:00:34.277138 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-11 01:00:34.277142 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-11 01:00:34.277146 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-11 01:00:34.277150 | orchestrator |
2026-03-11 01:00:34.277154 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-11 01:00:34.277157 | orchestrator | Wednesday 11 March 2026 00:59:41 +0000 (0:00:01.254) 0:00:01.698 *******
2026-03-11 01:00:34.277162 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-11 01:00:34.277166 | orchestrator |
2026-03-11 01:00:34.277171 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-11 01:00:34.277176 | orchestrator | Wednesday 11 March 2026 00:59:42 +0000 (0:00:01.367) 0:00:03.065 *******
2026-03-11 01:00:34.277183 | orchestrator | changed: [testbed-manager]
2026-03-11 01:00:34.277189 | orchestrator |
2026-03-11 01:00:34.277196 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-11 01:00:34.277202 | orchestrator | Wednesday 11 March 2026 00:59:43 +0000 (0:00:00.880) 0:00:03.945 *******
2026-03-11 01:00:34.277208 | orchestrator | changed: [testbed-manager]
2026-03-11 01:00:34.277214 | orchestrator |
2026-03-11 01:00:34.277221 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-11 01:00:34.277227 | orchestrator | Wednesday 11 March 2026 00:59:44 +0000 (0:00:00.860) 0:00:04.806 *******
2026-03-11 01:00:34.277234 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-11 01:00:34.277241 | orchestrator | ok: [testbed-manager]
2026-03-11 01:00:34.277248 | orchestrator |
2026-03-11 01:00:34.277260 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-11 01:00:34.277272 | orchestrator | Wednesday 11 March 2026 01:00:24 +0000 (0:00:40.381) 0:00:45.188 *******
2026-03-11 01:00:34.277280 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-11 01:00:34.277286 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-11 01:00:34.277290 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-11 01:00:34.277295 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-11 01:00:34.277300 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-11 01:00:34.277304 | orchestrator |
2026-03-11 01:00:34.277309 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-11 01:00:34.277314 | orchestrator | Wednesday 11 March 2026 01:00:28 +0000 (0:00:03.554) 0:00:48.742 *******
2026-03-11 01:00:34.277318 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-11 01:00:34.277323 | orchestrator |
2026-03-11 01:00:34.277327 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-11 01:00:34.277332 | orchestrator | Wednesday 11 March 2026 01:00:28 +0000 (0:00:00.429) 0:00:49.172 *******
2026-03-11 01:00:34.277336 | orchestrator | skipping: [testbed-manager]
2026-03-11 01:00:34.277340 | orchestrator |
2026-03-11 01:00:34.277345 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-11 01:00:34.277349 | orchestrator | Wednesday 11 March 2026 01:00:28 +0000 (0:00:00.120) 0:00:49.292 *******
2026-03-11 01:00:34.277354 | orchestrator | skipping: [testbed-manager]
2026-03-11 01:00:34.277358 | orchestrator |
2026-03-11 01:00:34.277362 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-11 01:00:34.277367 | orchestrator | Wednesday 11 March 2026 01:00:29 +0000 (0:00:00.403) 0:00:49.696 *******
2026-03-11 01:00:34.277371 | orchestrator | changed: [testbed-manager]
2026-03-11 01:00:34.277376 | orchestrator |
2026-03-11 01:00:34.277380 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-11 01:00:34.277385 | orchestrator | Wednesday 11 March 2026 01:00:30 +0000 (0:00:01.242) 0:00:50.939 *******
2026-03-11 01:00:34.277389 | orchestrator | changed: [testbed-manager]
2026-03-11 01:00:34.277394 | orchestrator |
2026-03-11 01:00:34.277398 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-11 01:00:34.277402 | orchestrator | Wednesday 11 March 2026 01:00:31 +0000 (0:00:00.621) 0:00:51.560 *******
2026-03-11 01:00:34.277407 | orchestrator | changed: [testbed-manager]
2026-03-11 01:00:34.277411 | orchestrator |
2026-03-11 01:00:34.277415 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-11 01:00:34.277420 | orchestrator | Wednesday 11 March 2026 01:00:31 +0000 (0:00:00.532) 0:00:52.092 *******
2026-03-11 01:00:34.277424 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-11 01:00:34.277429 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-11 01:00:34.277433 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-11 01:00:34.277438 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-11 01:00:34.277442 | orchestrator |
2026-03-11 01:00:34.277446 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:00:34.277451 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 01:00:34.277456 | orchestrator |
2026-03-11 01:00:34.277502 | orchestrator |
2026-03-11 01:00:34.277507 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:00:34.277511 | orchestrator | Wednesday 11 March 2026 01:00:32 +0000 (0:00:01.288) 0:00:53.381 *******
2026-03-11 01:00:34.277516 | orchestrator | ===============================================================================
2026-03-11 01:00:34.277520 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.38s
2026-03-11 01:00:34.277525 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.55s
2026-03-11 01:00:34.277534 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.37s
2026-03-11 01:00:34.277539 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.29s
2026-03-11 01:00:34.277542 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s
2026-03-11 01:00:34.277546 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.24s
2026-03-11 01:00:34.277550 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s
2026-03-11 01:00:34.277584 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.86s
2026-03-11 01:00:34.277588 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.62s
2026-03-11 01:00:34.277592 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s
2026-03-11 01:00:34.277595 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s
2026-03-11 01:00:34.277599 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.40s
2026-03-11 01:00:34.277603 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-03-11 01:00:34.277608 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-03-11 01:00:34.280786 | orchestrator | 2026-03-11 01:00:34 | INFO  | Task 2537e14b-68ca-485a-b68d-6050c8dd4aeb is in state SUCCESS
2026-03-11 01:00:34.281627 | orchestrator |
2026-03-11 01:00:34.281684 | orchestrator |
2026-03-11 01:00:34.281704 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:00:34.281710 | orchestrator |
2026-03-11 01:00:34.281726 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:00:34.281730 | orchestrator | Wednesday 11 March 2026 00:57:57 +0000 (0:00:00.272) 0:00:00.272 *******
2026-03-11 01:00:34.281921 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:00:34.281931 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:00:34.281938 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:00:34.281945 | orchestrator |
2026-03-11 01:00:34.281951 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:00:34.281958 | orchestrator | Wednesday 11 March 2026 00:57:57 +0000 (0:00:00.294) 0:00:00.567 *******
2026-03-11 01:00:34.281964 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-11 01:00:34.281968 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-11 01:00:34.281972 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-11 01:00:34.281976 | orchestrator |
2026-03-11 01:00:34.281981 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-11 01:00:34.281985 | orchestrator |
2026-03-11 01:00:34.281988 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-11 01:00:34.281992 | orchestrator | Wednesday 11 March 2026 00:57:57 +0000 (0:00:00.436) 0:00:01.003 *******
2026-03-11 01:00:34.281997 | orchestrator |
included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:00:34.282002 | orchestrator | 2026-03-11 01:00:34.282006 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-11 01:00:34.282009 | orchestrator | Wednesday 11 March 2026 00:57:58 +0000 (0:00:00.554) 0:00:01.557 ******* 2026-03-11 01:00:34.282052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 01:00:34.282170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 01:00:34.282177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 01:00:34.282183 | orchestrator |
2026-03-11 01:00:34.282190 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-11 01:00:34.282201 | orchestrator | Wednesday 11 March 2026 00:58:00 +0000 (0:00:01.752) 0:00:03.310 *******
2026-03-11 01:00:34.282207 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.282212 | orchestrator |
2026-03-11 01:00:34.282219 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-11 01:00:34.282223 | orchestrator | Wednesday 11 March 2026 00:58:00 +0000 (0:00:00.155) 0:00:03.466 *******
2026-03-11 01:00:34.282226 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.282230 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:00:34.282234 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:00:34.282238 | orchestrator |
2026-03-11 01:00:34.282242 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-11 01:00:34.282245 | orchestrator | Wednesday 11 March 2026 00:58:00 +0000 (0:00:00.428) 0:00:03.895 *******
2026-03-11 01:00:34.282249 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:00:34.282253 | orchestrator |
2026-03-11 01:00:34.282257 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-11 01:00:34.282261 | orchestrator | Wednesday 11 March 2026 00:58:01 +0000 (0:00:00.816) 0:00:04.711 *******
2026-03-11 01:00:34.282266 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:00:34.282269 | orchestrator |
2026-03-11 01:00:34.282273 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-11 01:00:34.282277 | orchestrator | Wednesday 11 March 2026 00:58:02 +0000 (0:00:00.573) 0:00:05.285 *******
2026-03-11 01:00:34.282281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282341 | orchestrator | 2026-03-11 01:00:34.282345 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-11 01:00:34.282348 | orchestrator | Wednesday 11 March 2026 00:58:05 +0000 (0:00:03.158) 0:00:08.443 ******* 2026-03-11 01:00:34.282360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282377 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:34.282381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282403 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:34.282409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282440 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:34.282446 | orchestrator | 2026-03-11 01:00:34.282451 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-11 01:00:34.282457 | orchestrator | Wednesday 11 March 2026 00:58:05 +0000 (0:00:00.555) 0:00:08.998 ******* 2026-03-11 01:00:34.282510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282545 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:34.282551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282573 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:34.282590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282612 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:34.282616 | orchestrator | 2026-03-11 01:00:34.282621 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-11 01:00:34.282625 | orchestrator | Wednesday 11 March 2026 00:58:06 +0000 (0:00:00.767) 0:00:09.765 ******* 2026-03-11 01:00:34.282630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282695 | orchestrator | 2026-03-11 01:00:34.282699 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-11 01:00:34.282704 | orchestrator | Wednesday 11 March 2026 00:58:09 +0000 (0:00:03.142) 0:00:12.908 ******* 
2026-03-11 01:00:34.282709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:34.282742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:34.282765 | orchestrator | 2026-03-11 01:00:34.282770 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-11 01:00:34.282774 | orchestrator | Wednesday 11 March 2026 00:58:15 +0000 (0:00:05.714) 0:00:18.622 ******* 2026-03-11 01:00:34.282778 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:34.282781 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:00:34.282785 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:00:34.282789 | orchestrator | 2026-03-11 01:00:34.282793 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-11 01:00:34.282797 | orchestrator | Wednesday 11 March 2026 00:58:17 +0000 (0:00:01.781) 0:00:20.404 ******* 2026-03-11 
01:00:34.282800 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:34.282804 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:34.282808 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:34.282812 | orchestrator | 2026-03-11 01:00:34.282815 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-11 01:00:34.282821 | orchestrator | Wednesday 11 March 2026 00:58:17 +0000 (0:00:00.514) 0:00:20.918 ******* 2026-03-11 01:00:34.282825 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:34.282832 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:34.282836 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:34.282839 | orchestrator | 2026-03-11 01:00:34.282843 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-11 01:00:34.282847 | orchestrator | Wednesday 11 March 2026 00:58:18 +0000 (0:00:00.287) 0:00:21.205 ******* 2026-03-11 01:00:34.282851 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:34.282854 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:34.282858 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:34.282862 | orchestrator | 2026-03-11 01:00:34.282866 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-11 01:00:34.282869 | orchestrator | Wednesday 11 March 2026 00:58:18 +0000 (0:00:00.472) 0:00:21.678 ******* 2026-03-11 01:00:34.282873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282889 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:34.282893 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:34.282908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:34.282912 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:34.282916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:34.282921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-11 01:00:34.282928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 01:00:34.282931 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:00:34.282935 | orchestrator |
2026-03-11 01:00:34.282939 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-11 01:00:34.282943 | orchestrator | Wednesday 11 March 2026 00:58:19 +0000 (0:00:00.583) 0:00:22.261 *******
2026-03-11 01:00:34.282947 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.282950 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:00:34.282954 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:00:34.282958 | orchestrator |
2026-03-11 01:00:34.282962 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-03-11 01:00:34.282965 | orchestrator | Wednesday 11 March 2026 00:58:19 +0000 (0:00:00.354) 0:00:22.616 *******
2026-03-11 01:00:34.282969 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-11 01:00:34.282976 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-11 01:00:34.282983 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-11 01:00:34.282986 | orchestrator |
2026-03-11 01:00:34.282990 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-03-11 01:00:34.282994 | orchestrator | Wednesday 11 March 2026 00:58:20 +0000 (0:00:01.362) 0:00:23.978 *******
2026-03-11 01:00:34.282998 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:00:34.283002 | orchestrator |
2026-03-11 01:00:34.283005 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-03-11 01:00:34.283009 | orchestrator | Wednesday 11 March 2026 00:58:21 +0000 (0:00:01.128) 0:00:25.107 *******
2026-03-11 01:00:34.283013 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:00:34.283017 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283020 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:00:34.283024 | orchestrator |
2026-03-11 01:00:34.283028 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-03-11 01:00:34.283032 | orchestrator | Wednesday 11 March 2026 00:58:22 +0000 (0:00:00.852) 0:00:25.960 *******
2026-03-11 01:00:34.283035 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-11 01:00:34.283039 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-11 01:00:34.283043 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:00:34.283047 | orchestrator |
2026-03-11 01:00:34.283051 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-03-11 01:00:34.283054 | orchestrator | Wednesday 11 March 2026 00:58:23 +0000 (0:00:00.308) 0:00:27.061 *******
2026-03-11 01:00:34.283058 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:00:34.283063 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:00:34.283066 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:00:34.283070 | orchestrator |
2026-03-11 01:00:34.283077 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-03-11 01:00:34.283081 | orchestrator | Wednesday 11 March 2026 00:58:24 +0000 (0:00:00.308) 0:00:27.369 *******
2026-03-11 01:00:34.283085 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-11 01:00:34.283089 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-11 01:00:34.283092 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-11 01:00:34.283096 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-11 01:00:34.283100 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-11 01:00:34.283104 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-11 01:00:34.283108 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-11 01:00:34.283112 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-11 01:00:34.283115 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-11 01:00:34.283119 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-11 01:00:34.283123 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-11 01:00:34.283127 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-11 01:00:34.283130 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-11 01:00:34.283134 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-11 01:00:34.283138 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-11 01:00:34.283142 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:00:34.283146 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:00:34.283150 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:00:34.283154 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:00:34.283157 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:00:34.283161 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:00:34.283165 | orchestrator |
2026-03-11 01:00:34.283169 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-03-11 01:00:34.283172 | orchestrator | Wednesday 11 March 2026 00:58:32 +0000 (0:00:07.894) 0:00:35.263 *******
2026-03-11 01:00:34.283176 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:00:34.283180 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:00:34.283184 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:00:34.283187 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:00:34.283191 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:00:34.283198 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:00:34.283204 | orchestrator |
2026-03-11 01:00:34.283208 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-03-11 01:00:34.283212 | orchestrator | Wednesday 11 March 2026 00:58:34 +0000 (0:00:02.605) 0:00:37.869 *******
2026-03-11 01:00:34.283220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-11 01:00:34.283225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-11 01:00:34.283229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-11 01:00:34.283234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-11 01:00:34.283243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-11 01:00:34.283251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-11 01:00:34.283255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 01:00:34.283259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 01:00:34.283262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 01:00:34.283266 | orchestrator |
2026-03-11 01:00:34.283270 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-11 01:00:34.283274 | orchestrator | Wednesday 11 March 2026 00:58:36 +0000 (0:00:02.201) 0:00:40.070 *******
2026-03-11 01:00:34.283278 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283282 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:00:34.283285 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:00:34.283289 | orchestrator |
2026-03-11 01:00:34.283293 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-03-11 01:00:34.283297 | orchestrator | Wednesday 11 March 2026 00:58:37 +0000 (0:00:00.298) 0:00:40.369 *******
2026-03-11 01:00:34.283300 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283304 | orchestrator |
2026-03-11 01:00:34.283308 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-03-11 01:00:34.283312 | orchestrator | Wednesday 11 March 2026 00:58:39 +0000 (0:00:02.223) 0:00:42.592 *******
2026-03-11 01:00:34.283319 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283323 | orchestrator |
2026-03-11 01:00:34.283327 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-03-11 01:00:34.283331 | orchestrator | Wednesday 11 March 2026 00:58:42 +0000 (0:00:02.838) 0:00:45.431 *******
2026-03-11 01:00:34.283335 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:00:34.283339 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:00:34.283342 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:00:34.283346 | orchestrator |
2026-03-11 01:00:34.283350 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-03-11 01:00:34.283356 | orchestrator | Wednesday 11 March 2026 00:58:43 +0000 (0:00:01.011) 0:00:46.442 *******
2026-03-11 01:00:34.283360 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:00:34.283364 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:00:34.283370 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:00:34.283374 | orchestrator |
2026-03-11 01:00:34.283378 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-03-11 01:00:34.283382 | orchestrator | Wednesday 11 March 2026 00:58:43 +0000 (0:00:00.319) 0:00:46.762 *******
2026-03-11 01:00:34.283386 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283389 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:00:34.283393 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:00:34.283397 | orchestrator |
2026-03-11 01:00:34.283401 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-11 01:00:34.283405 | orchestrator | Wednesday 11 March 2026 00:58:43 +0000 (0:00:00.307) 0:00:47.070 *******
2026-03-11 01:00:34.283408 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283412 | orchestrator |
2026-03-11 01:00:34.283416 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-11 01:00:34.283420 | orchestrator | Wednesday 11 March 2026 00:58:58 +0000 (0:00:14.868) 0:01:01.939 *******
2026-03-11 01:00:34.283423 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283427 | orchestrator |
2026-03-11 01:00:34.283431 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-11 01:00:34.283435 | orchestrator | Wednesday 11 March 2026 00:59:10 +0000 (0:00:11.929) 0:01:13.868 *******
2026-03-11 01:00:34.283438 | orchestrator |
2026-03-11 01:00:34.283442 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-11 01:00:34.283446 | orchestrator | Wednesday 11 March 2026 00:59:10 +0000 (0:00:00.068) 0:01:13.936 *******
2026-03-11 01:00:34.283450 | orchestrator |
2026-03-11 01:00:34.283454 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-11 01:00:34.283522 | orchestrator | Wednesday 11 March 2026 00:59:10 +0000 (0:00:00.078) 0:01:14.015 *******
2026-03-11 01:00:34.283528 | orchestrator |
2026-03-11 01:00:34.283532 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-11 01:00:34.283535 | orchestrator | Wednesday 11 March 2026 00:59:10 +0000 (0:00:00.066) 0:01:14.081 *******
2026-03-11 01:00:34.283539 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283543 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:00:34.283547 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:00:34.283551 | orchestrator |
2026-03-11 01:00:34.283555 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-11 01:00:34.283558 | orchestrator | Wednesday 11 March 2026 00:59:25 +0000 (0:00:14.658) 0:01:28.739 *******
2026-03-11 01:00:34.283562 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283566 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:00:34.283570 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:00:34.283574 | orchestrator |
2026-03-11 01:00:34.283577 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-11 01:00:34.283581 | orchestrator | Wednesday 11 March 2026 00:59:29 +0000 (0:00:04.095) 0:01:32.835 *******
2026-03-11 01:00:34.283585 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283589 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:00:34.283592 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:00:34.283601 | orchestrator |
2026-03-11 01:00:34.283605 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-11 01:00:34.283608 | orchestrator | Wednesday 11 March 2026 00:59:35 +0000 (0:00:05.862) 0:01:38.697 *******
2026-03-11 01:00:34.283612 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:00:34.283616 | orchestrator |
2026-03-11 01:00:34.283620 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-11 01:00:34.283624 | orchestrator | Wednesday 11 March 2026 00:59:36 +0000 (0:00:00.682) 0:01:39.379 *******
2026-03-11 01:00:34.283627 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:00:34.283631 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:00:34.283635 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:00:34.283639 | orchestrator |
2026-03-11 01:00:34.283642 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-11 01:00:34.283646 | orchestrator | Wednesday 11 March 2026 00:59:37 +0000 (0:00:00.814) 0:01:40.194 *******
2026-03-11 01:00:34.283650 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:00:34.283654 | orchestrator |
2026-03-11 01:00:34.283658 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-11 01:00:34.283661 | orchestrator | Wednesday 11 March 2026 00:59:38 +0000 (0:00:01.634) 0:01:41.829 *******
2026-03-11 01:00:34.283665 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-11 01:00:34.283669 | orchestrator |
2026-03-11 01:00:34.283673 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-11 01:00:34.283677 | orchestrator | Wednesday 11 March 2026 00:59:51 +0000 (0:00:12.564) 0:01:54.393 *******
2026-03-11 01:00:34.283680 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-11 01:00:34.283684 | orchestrator |
2026-03-11 01:00:34.283688 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-11 01:00:34.283692 | orchestrator | Wednesday 11 March 2026 01:00:19 +0000 (0:00:28.211) 0:02:22.604 *******
2026-03-11 01:00:34.283696 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-11 01:00:34.283700 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-11 01:00:34.283704 | orchestrator |
2026-03-11 01:00:34.283708 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-11 01:00:34.283711 | orchestrator | Wednesday 11 March 2026 01:00:27 +0000 (0:00:07.799) 0:02:30.404 *******
2026-03-11 01:00:34.283715 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283719 | orchestrator |
2026-03-11 01:00:34.283723 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-11 01:00:34.283727 | orchestrator | Wednesday 11 March 2026 01:00:27 +0000 (0:00:00.107) 0:02:30.512 *******
2026-03-11 01:00:34.283730 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283734 | orchestrator |
2026-03-11 01:00:34.283740 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-11 01:00:34.283747 | orchestrator | Wednesday 11 March 2026 01:00:27 +0000 (0:00:00.112) 0:02:30.624 *******
2026-03-11 01:00:34.283751 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283755 | orchestrator |
2026-03-11 01:00:34.283759 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-11 01:00:34.283762 | orchestrator | Wednesday 11 March 2026 01:00:27 +0000 (0:00:00.110) 0:02:30.734 *******
2026-03-11 01:00:34.283766 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283770 | orchestrator |
2026-03-11 01:00:34.283774 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-11 01:00:34.283777 | orchestrator | Wednesday 11 March 2026 01:00:28 +0000 (0:00:00.410) 0:02:31.145 *******
2026-03-11 01:00:34.283781 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:00:34.283785 | orchestrator |
2026-03-11 01:00:34.283788 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-11 01:00:34.283792 | orchestrator | Wednesday 11 March 2026 01:00:31 +0000 (0:00:03.653) 0:02:34.798 *******
2026-03-11 01:00:34.283800 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:00:34.283804 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:00:34.283808 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:00:34.283812 | orchestrator |
2026-03-11 01:00:34.283815 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:00:34.283819 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-11 01:00:34.283824 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-11 01:00:34.283828 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-11 01:00:34.283832 | orchestrator |
2026-03-11 01:00:34.283836 | orchestrator |
2026-03-11 01:00:34.283840 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:00:34.283843 | orchestrator | Wednesday 11 March 2026 01:00:32 +0000 (0:00:00.361) 0:02:35.160 *******
2026-03-11 01:00:34.283847 | orchestrator | ===============================================================================
2026-03-11 01:00:34.283851 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.21s
2026-03-11 01:00:34.283855 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.87s
2026-03-11 01:00:34.283859 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.66s
2026-03-11 01:00:34.283863 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.56s
2026-03-11 01:00:34.283866 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.93s
2026-03-11 01:00:34.283870 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 7.89s
2026-03-11 01:00:34.283874 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.80s
2026-03-11 01:00:34.283877 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.86s
2026-03-11 01:00:34.283881 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.71s
2026-03-11 01:00:34.283885 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.10s
2026-03-11 01:00:34.283889 | orchestrator | keystone : Creating default user role ----------------------------------- 3.65s
2026-03-11 01:00:34.283893 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.16s
2026-03-11 01:00:34.283896 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.14s
2026-03-11 01:00:34.283900 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.84s
2026-03-11 01:00:34.283904 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.61s
2026-03-11 01:00:34.283907 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.22s
2026-03-11 01:00:34.283911 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.20s
2026-03-11 01:00:34.283915 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.78s
2026-03-11 01:00:34.283919 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.75s
2026-03-11 01:00:34.283923 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.63s
2026-03-11 01:00:34.283926 | orchestrator | 2026-03-11 01:00:34 | INFO  | Task
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:34.283930 | orchestrator | 2026-03-11 01:00:34 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:37.328112 | orchestrator | 2026-03-11 01:00:37 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:37.328236 | orchestrator | 2026-03-11 01:00:37 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:37.328604 | orchestrator | 2026-03-11 01:00:37 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:37.329287 | orchestrator | 2026-03-11 01:00:37 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:37.330060 | orchestrator | 2026-03-11 01:00:37 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:37.330110 | orchestrator | 2026-03-11 01:00:37 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:40.361579 | orchestrator | 2026-03-11 01:00:40 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:40.362213 | orchestrator | 2026-03-11 01:00:40 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:40.363354 | orchestrator | 2026-03-11 01:00:40 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:40.364413 | orchestrator | 2026-03-11 01:00:40 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:40.365396 | orchestrator | 2026-03-11 01:00:40 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:40.365429 | orchestrator | 2026-03-11 01:00:40 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:43.402151 | orchestrator | 2026-03-11 01:00:43 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:43.403789 | orchestrator | 2026-03-11 01:00:43 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:43.405262 | orchestrator | 2026-03-11 01:00:43 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:43.407032 | orchestrator | 2026-03-11 01:00:43 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:43.409575 | orchestrator | 2026-03-11 01:00:43 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:43.409627 | orchestrator | 2026-03-11 01:00:43 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:46.451410 | orchestrator | 2026-03-11 01:00:46 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:46.453889 | orchestrator | 2026-03-11 01:00:46 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:46.456510 | orchestrator | 2026-03-11 01:00:46 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:46.458597 | orchestrator | 2026-03-11 01:00:46 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:46.460497 | orchestrator | 2026-03-11 01:00:46 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:46.460967 | orchestrator | 2026-03-11 01:00:46 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:49.501611 | orchestrator | 2026-03-11 01:00:49 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:49.503359 | orchestrator | 2026-03-11 01:00:49 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:49.505820 | orchestrator | 2026-03-11 01:00:49 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:49.507463 | orchestrator | 2026-03-11 01:00:49 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:49.508851 | orchestrator | 2026-03-11 01:00:49 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:49.508890 | orchestrator | 2026-03-11 01:00:49 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:52.543714 | orchestrator | 2026-03-11 01:00:52 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:52.544696 | orchestrator | 2026-03-11 01:00:52 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:52.546163 | orchestrator | 2026-03-11 01:00:52 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:52.547299 | orchestrator | 2026-03-11 01:00:52 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:52.548496 | orchestrator | 2026-03-11 01:00:52 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:52.548758 | orchestrator | 2026-03-11 01:00:52 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:55.585700 | orchestrator | 2026-03-11 01:00:55 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:55.586802 | orchestrator | 2026-03-11 01:00:55 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:55.588314 | orchestrator | 2026-03-11 01:00:55 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:55.589705 | orchestrator | 2026-03-11 01:00:55 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:55.591068 | orchestrator | 2026-03-11 01:00:55 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:55.591279 | orchestrator | 2026-03-11 01:00:55 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:00:58.631916 | orchestrator | 2026-03-11 01:00:58 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:00:58.632926 | orchestrator | 2026-03-11 01:00:58 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:00:58.634350 | orchestrator | 2026-03-11 01:00:58 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:00:58.635608 | orchestrator | 2026-03-11 01:00:58 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:00:58.636938 | orchestrator | 2026-03-11 01:00:58 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:00:58.636979 | orchestrator | 2026-03-11 01:00:58 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:01:01.673205 | orchestrator | 2026-03-11 01:01:01 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:01:01.674832 | orchestrator | 2026-03-11 01:01:01 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:01:01.675845 | orchestrator | 2026-03-11 01:01:01 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:01:01.676760 | orchestrator | 2026-03-11 01:01:01 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:01:01.677647 | orchestrator | 2026-03-11 01:01:01 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:01:01.677672 | orchestrator | 2026-03-11 01:01:01 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:01:04.714603 | orchestrator | 2026-03-11 01:01:04 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:01:04.716239 | orchestrator | 2026-03-11 01:01:04 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:01:04.719291 | orchestrator | 2026-03-11 01:01:04 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:01:04.721391 | orchestrator | 2026-03-11 01:01:04 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:01:04.723077 | orchestrator | 2026-03-11 01:01:04 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:01:04.723121 | orchestrator | 2026-03-11 01:01:04 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:01:07.758699 | orchestrator | 2026-03-11 01:01:07 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:01:07.760074 | orchestrator | 2026-03-11 01:01:07 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:01:07.762055 | orchestrator | 2026-03-11 01:01:07 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:01:07.763913 | orchestrator | 2026-03-11 01:01:07 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:01:07.765642 | orchestrator | 2026-03-11 01:01:07 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:01:07.765701 | orchestrator | 2026-03-11 01:01:07 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:01:10.813572 | orchestrator | 2026-03-11 01:01:10 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:01:10.816500 | orchestrator | 2026-03-11 01:01:10 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:01:10.819002 | orchestrator | 2026-03-11 01:01:10 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:01:10.821103 | orchestrator | 2026-03-11 01:01:10 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:01:10.823166 | orchestrator | 2026-03-11 01:01:10 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:01:10.823627 | orchestrator | 2026-03-11 01:01:10 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:01:13.850367 | orchestrator | 2026-03-11 01:01:13 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:01:13.850584 | orchestrator | 2026-03-11 01:01:13 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:01:13.851377 | orchestrator | 2026-03-11 01:01:13 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:01:13.852450 | orchestrator | 2026-03-11 01:01:13 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:01:13.853182 | orchestrator | 2026-03-11 01:01:13 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:01:13.855146 | orchestrator | 2026-03-11 01:01:13 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:01:16.877051 | orchestrator | 2026-03-11 01:01:16 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:01:16.877671 | orchestrator | 2026-03-11 01:01:16 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:01:16.878533 | orchestrator | 2026-03-11 01:01:16 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:01:16.879438 | orchestrator | 2026-03-11 01:01:16 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:01:16.879979 | orchestrator | 2026-03-11 01:01:16 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:01:16.880346 | orchestrator | 2026-03-11 01:01:16 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:01:19.909049 | orchestrator | 2026-03-11 01:01:19 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:01:19.909934 | orchestrator | 2026-03-11 01:01:19 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED
2026-03-11 01:01:19.910284 | orchestrator | 2026-03-11 01:01:19 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED
2026-03-11 01:01:19.911052 | orchestrator | 2026-03-11 01:01:19 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED
2026-03-11 01:01:19.911522 | orchestrator | 2026-03-11 01:01:19 | INFO  | Task
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:19.911548 | orchestrator | 2026-03-11 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:22.943826 | orchestrator | 2026-03-11 01:01:22 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:22.945121 | orchestrator | 2026-03-11 01:01:22 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:22.945736 | orchestrator | 2026-03-11 01:01:22 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:22.946628 | orchestrator | 2026-03-11 01:01:22 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:22.947373 | orchestrator | 2026-03-11 01:01:22 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:22.947392 | orchestrator | 2026-03-11 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:25.977857 | orchestrator | 2026-03-11 01:01:25 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:25.977944 | orchestrator | 2026-03-11 01:01:25 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:25.978788 | orchestrator | 2026-03-11 01:01:25 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:25.983541 | orchestrator | 2026-03-11 01:01:25 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:25.984139 | orchestrator | 2026-03-11 01:01:25 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:25.984186 | orchestrator | 2026-03-11 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:29.016174 | orchestrator | 2026-03-11 01:01:29 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:29.017161 | orchestrator | 2026-03-11 01:01:29 | INFO  | Task 
d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:29.017907 | orchestrator | 2026-03-11 01:01:29 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:29.018991 | orchestrator | 2026-03-11 01:01:29 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:29.019802 | orchestrator | 2026-03-11 01:01:29 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:29.019839 | orchestrator | 2026-03-11 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:32.044660 | orchestrator | 2026-03-11 01:01:32 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:32.044984 | orchestrator | 2026-03-11 01:01:32 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:32.046472 | orchestrator | 2026-03-11 01:01:32 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:32.046919 | orchestrator | 2026-03-11 01:01:32 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:32.047625 | orchestrator | 2026-03-11 01:01:32 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:32.047668 | orchestrator | 2026-03-11 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:35.071289 | orchestrator | 2026-03-11 01:01:35 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:35.071745 | orchestrator | 2026-03-11 01:01:35 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:35.073103 | orchestrator | 2026-03-11 01:01:35 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:35.073163 | orchestrator | 2026-03-11 01:01:35 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:35.073707 | orchestrator | 2026-03-11 01:01:35 | INFO  | Task 
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:35.073810 | orchestrator | 2026-03-11 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:38.096142 | orchestrator | 2026-03-11 01:01:38 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:38.096555 | orchestrator | 2026-03-11 01:01:38 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:38.097227 | orchestrator | 2026-03-11 01:01:38 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:38.097968 | orchestrator | 2026-03-11 01:01:38 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:38.098519 | orchestrator | 2026-03-11 01:01:38 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:38.098593 | orchestrator | 2026-03-11 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:41.127767 | orchestrator | 2026-03-11 01:01:41 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:41.130652 | orchestrator | 2026-03-11 01:01:41 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:41.133199 | orchestrator | 2026-03-11 01:01:41 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:41.134146 | orchestrator | 2026-03-11 01:01:41 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:41.135573 | orchestrator | 2026-03-11 01:01:41 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:41.135688 | orchestrator | 2026-03-11 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:44.157312 | orchestrator | 2026-03-11 01:01:44 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:44.157445 | orchestrator | 2026-03-11 01:01:44 | INFO  | Task 
d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:44.158306 | orchestrator | 2026-03-11 01:01:44 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:44.158912 | orchestrator | 2026-03-11 01:01:44 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:44.159525 | orchestrator | 2026-03-11 01:01:44 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:44.159551 | orchestrator | 2026-03-11 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:47.185039 | orchestrator | 2026-03-11 01:01:47 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:47.185132 | orchestrator | 2026-03-11 01:01:47 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:47.185910 | orchestrator | 2026-03-11 01:01:47 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:47.185955 | orchestrator | 2026-03-11 01:01:47 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:47.186638 | orchestrator | 2026-03-11 01:01:47 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:47.186703 | orchestrator | 2026-03-11 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:50.216243 | orchestrator | 2026-03-11 01:01:50 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:50.216579 | orchestrator | 2026-03-11 01:01:50 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:50.217423 | orchestrator | 2026-03-11 01:01:50 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:50.218107 | orchestrator | 2026-03-11 01:01:50 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:50.218704 | orchestrator | 2026-03-11 01:01:50 | INFO  | Task 
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:50.218727 | orchestrator | 2026-03-11 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:53.251534 | orchestrator | 2026-03-11 01:01:53 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:53.252230 | orchestrator | 2026-03-11 01:01:53 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:53.252993 | orchestrator | 2026-03-11 01:01:53 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:53.253864 | orchestrator | 2026-03-11 01:01:53 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:53.254525 | orchestrator | 2026-03-11 01:01:53 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:53.254648 | orchestrator | 2026-03-11 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:56.309827 | orchestrator | 2026-03-11 01:01:56 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:56.310394 | orchestrator | 2026-03-11 01:01:56 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:56.311319 | orchestrator | 2026-03-11 01:01:56 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:56.313252 | orchestrator | 2026-03-11 01:01:56 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:56.314206 | orchestrator | 2026-03-11 01:01:56 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:56.314250 | orchestrator | 2026-03-11 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:59.336288 | orchestrator | 2026-03-11 01:01:59 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:01:59.337033 | orchestrator | 2026-03-11 01:01:59 | INFO  | Task 
d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:01:59.337877 | orchestrator | 2026-03-11 01:01:59 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:01:59.338655 | orchestrator | 2026-03-11 01:01:59 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:01:59.340720 | orchestrator | 2026-03-11 01:01:59 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:01:59.340767 | orchestrator | 2026-03-11 01:01:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:02.363619 | orchestrator | 2026-03-11 01:02:02 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:02.364152 | orchestrator | 2026-03-11 01:02:02 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:02.364805 | orchestrator | 2026-03-11 01:02:02 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:02.365524 | orchestrator | 2026-03-11 01:02:02 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:02:02.366169 | orchestrator | 2026-03-11 01:02:02 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:02.366192 | orchestrator | 2026-03-11 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:05.393427 | orchestrator | 2026-03-11 01:02:05 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:05.394557 | orchestrator | 2026-03-11 01:02:05 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:05.396161 | orchestrator | 2026-03-11 01:02:05 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:05.397626 | orchestrator | 2026-03-11 01:02:05 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state STARTED 2026-03-11 01:02:05.399669 | orchestrator | 2026-03-11 01:02:05 | INFO  | Task 
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:05.399815 | orchestrator | 2026-03-11 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:08.432273 | orchestrator | 2026-03-11 01:02:08 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:08.433884 | orchestrator | 2026-03-11 01:02:08 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:08.434643 | orchestrator | 2026-03-11 01:02:08 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:08.435215 | orchestrator | 2026-03-11 01:02:08 | INFO  | Task 9d11eb93-af99-48ec-82fa-d43b94c37317 is in state SUCCESS 2026-03-11 01:02:08.435982 | orchestrator | 2026-03-11 01:02:08 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:08.436021 | orchestrator | 2026-03-11 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:11.468980 | orchestrator | 2026-03-11 01:02:11 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:11.470886 | orchestrator | 2026-03-11 01:02:11 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:11.472286 | orchestrator | 2026-03-11 01:02:11 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:11.473783 | orchestrator | 2026-03-11 01:02:11 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:11.474094 | orchestrator | 2026-03-11 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:14.501263 | orchestrator | 2026-03-11 01:02:14 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:14.501396 | orchestrator | 2026-03-11 01:02:14 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:14.501406 | orchestrator | 2026-03-11 01:02:14 | INFO  | Task 
c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:14.501412 | orchestrator | 2026-03-11 01:02:14 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:14.501423 | orchestrator | 2026-03-11 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:17.576410 | orchestrator | 2026-03-11 01:02:17 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:17.576459 | orchestrator | 2026-03-11 01:02:17 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:17.576466 | orchestrator | 2026-03-11 01:02:17 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:17.576470 | orchestrator | 2026-03-11 01:02:17 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:17.576486 | orchestrator | 2026-03-11 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:20.588654 | orchestrator | 2026-03-11 01:02:20 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:20.589188 | orchestrator | 2026-03-11 01:02:20 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:20.591429 | orchestrator | 2026-03-11 01:02:20 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:20.592085 | orchestrator | 2026-03-11 01:02:20 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:20.592114 | orchestrator | 2026-03-11 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:23.617195 | orchestrator | 2026-03-11 01:02:23 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:23.617788 | orchestrator | 2026-03-11 01:02:23 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state STARTED 2026-03-11 01:02:23.618594 | orchestrator | 2026-03-11 01:02:23 | INFO  | Task 
c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:23.619101 | orchestrator | 2026-03-11 01:02:23 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:23.619287 | orchestrator | 2026-03-11 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:26.642246 | orchestrator | 2026-03-11 01:02:26 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:26.642389 | orchestrator | 2026-03-11 01:02:26 | INFO  | Task d91c4b71-681b-4c9c-8afd-277fcb477eb5 is in state SUCCESS 2026-03-11 01:02:26.643055 | orchestrator | 2026-03-11 01:02:26 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:26.643719 | orchestrator | 2026-03-11 01:02:26 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED 2026-03-11 01:02:26.644476 | orchestrator | 2026-03-11 01:02:26 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:26.644502 | orchestrator | 2026-03-11 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:29.664174 | orchestrator | 2026-03-11 01:02:29 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:29.695053 | orchestrator | 2026-03-11 01:02:29 | INFO  | Task c9b5380d-23da-4831-9cf0-1222afffaedc is in state STARTED 2026-03-11 01:02:29.695926 | orchestrator | 2026-03-11 01:02:29 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED 2026-03-11 01:02:29.695973 | orchestrator | 2026-03-11 01:02:29 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:02:29.695984 | orchestrator | 2026-03-11 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:32.696425 | orchestrator | 2026-03-11 01:02:32 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED 2026-03-11 01:02:32.697379 | orchestrator | 2026-03-11 01:02:32 | INFO  | Task 
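The long run of status checks above is a simple poll-until-done loop: query each outstanding task, report its state, drop any task that has reached a terminal state, sleep, and repeat. A minimal Python sketch of that pattern, with hypothetical names (`get_state` stands in for whatever task API the deploy tooling actually queries):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll tasks until none remain in a non-terminal state or the timeout expires."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # sorted() snapshots the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

# Example with a fake state source: "b" needs two extra polls before finishing
states = {"a": iter(["SUCCESS"]), "b": iter(["STARTED", "STARTED", "SUCCESS"])}
wait_for_tasks(["a", "b"], lambda t: next(states[t]), interval=0.01)
```

The deadline guard is an assumption on my part; the log itself only shows the check/sleep cycle.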
c9b5380d-23da-4831-9cf0-1222afffaedc is in state SUCCESS 2026-03-11 01:02:32.698485 | orchestrator | 2026-03-11 01:02:32.698517 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-11 01:02:32.698521 | orchestrator | 2.16.14 2026-03-11 01:02:32.698525 | orchestrator | 2026-03-11 01:02:32.698529 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-11 01:02:32.698533 | orchestrator | 2026-03-11 01:02:32.698536 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-11 01:02:32.698539 | orchestrator | Wednesday 11 March 2026 01:00:36 +0000 (0:00:00.201) 0:00:00.201 ******* 2026-03-11 01:02:32.698556 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698560 | orchestrator | 2026-03-11 01:02:32.698563 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-11 01:02:32.698567 | orchestrator | Wednesday 11 March 2026 01:00:38 +0000 (0:00:01.253) 0:00:01.455 ******* 2026-03-11 01:02:32.698570 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698573 | orchestrator | 2026-03-11 01:02:32.698576 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-11 01:02:32.698579 | orchestrator | Wednesday 11 March 2026 01:00:39 +0000 (0:00:00.895) 0:00:02.351 ******* 2026-03-11 01:02:32.698583 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698586 | orchestrator | 2026-03-11 01:02:32.698589 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-11 01:02:32.698592 | orchestrator | Wednesday 11 March 2026 01:00:39 +0000 (0:00:00.914) 0:00:03.265 ******* 2026-03-11 01:02:32.698595 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698598 | orchestrator | 2026-03-11 01:02:32.698601 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to 
error] **************************** 2026-03-11 01:02:32.698604 | orchestrator | Wednesday 11 March 2026 01:00:40 +0000 (0:00:01.029) 0:00:04.295 ******* 2026-03-11 01:02:32.698608 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698611 | orchestrator | 2026-03-11 01:02:32.698614 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-11 01:02:32.698617 | orchestrator | Wednesday 11 March 2026 01:00:41 +0000 (0:00:00.833) 0:00:05.129 ******* 2026-03-11 01:02:32.698620 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698623 | orchestrator | 2026-03-11 01:02:32.698627 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-11 01:02:32.698630 | orchestrator | Wednesday 11 March 2026 01:00:42 +0000 (0:00:00.848) 0:00:05.977 ******* 2026-03-11 01:02:32.698633 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698636 | orchestrator | 2026-03-11 01:02:32.698639 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-11 01:02:32.698642 | orchestrator | Wednesday 11 March 2026 01:00:43 +0000 (0:00:01.219) 0:00:07.197 ******* 2026-03-11 01:02:32.698645 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698648 | orchestrator | 2026-03-11 01:02:32.698652 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-11 01:02:32.698655 | orchestrator | Wednesday 11 March 2026 01:00:44 +0000 (0:00:01.095) 0:00:08.293 ******* 2026-03-11 01:02:32.698658 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:32.698661 | orchestrator | 2026-03-11 01:02:32.698664 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-11 01:02:32.698667 | orchestrator | Wednesday 11 March 2026 01:01:42 +0000 (0:00:57.606) 0:01:05.899 ******* 2026-03-11 01:02:32.698670 | orchestrator | skipping: 
[testbed-manager] 2026-03-11 01:02:32.698712 | orchestrator | 2026-03-11 01:02:32.698718 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-11 01:02:32.698723 | orchestrator | 2026-03-11 01:02:32.698728 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-11 01:02:32.698764 | orchestrator | Wednesday 11 March 2026 01:01:42 +0000 (0:00:00.146) 0:01:06.046 ******* 2026-03-11 01:02:32.698770 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:32.698800 | orchestrator | 2026-03-11 01:02:32.698991 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-11 01:02:32.699024 | orchestrator | 2026-03-11 01:02:32.699028 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-11 01:02:32.699032 | orchestrator | Wednesday 11 March 2026 01:01:54 +0000 (0:00:11.416) 0:01:17.462 ******* 2026-03-11 01:02:32.699035 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:02:32.699038 | orchestrator | 2026-03-11 01:02:32.699042 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-11 01:02:32.699045 | orchestrator | 2026-03-11 01:02:32.699048 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-11 01:02:32.699057 | orchestrator | Wednesday 11 March 2026 01:02:05 +0000 (0:00:11.046) 0:01:28.509 ******* 2026-03-11 01:02:32.699060 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:02:32.699063 | orchestrator | 2026-03-11 01:02:32.699066 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:02:32.699070 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 01:02:32.699080 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-03-11 01:02:32.699084 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:32.699087 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:32.699090 | orchestrator | 2026-03-11 01:02:32.699101 | orchestrator | 2026-03-11 01:02:32.699108 | orchestrator | 2026-03-11 01:02:32.699111 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:02:32.699121 | orchestrator | Wednesday 11 March 2026 01:02:06 +0000 (0:00:01.055) 0:01:29.565 ******* 2026-03-11 01:02:32.699124 | orchestrator | =============================================================================== 2026-03-11 01:02:32.699127 | orchestrator | Create admin user ------------------------------------------------------ 57.61s 2026-03-11 01:02:32.699161 | orchestrator | Restart ceph manager service ------------------------------------------- 23.52s 2026-03-11 01:02:32.699169 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.25s 2026-03-11 01:02:32.699174 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.22s 2026-03-11 01:02:32.699178 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.10s 2026-03-11 01:02:32.699183 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.03s 2026-03-11 01:02:32.699188 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s 2026-03-11 01:02:32.699193 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2026-03-11 01:02:32.699198 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.85s 2026-03-11 01:02:32.699203 | orchestrator | Set mgr/dashboard/standby_behaviour to error 
---------------------------- 0.83s 2026-03-11 01:02:32.699208 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2026-03-11 01:02:32.699214 | orchestrator | 2026-03-11 01:02:32.699219 | orchestrator | 2026-03-11 01:02:32.699223 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-11 01:02:32.699229 | orchestrator | 2026-03-11 01:02:32.699233 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-11 01:02:32.699236 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:00.091) 0:00:00.091 ******* 2026-03-11 01:02:32.699240 | orchestrator | changed: [localhost] 2026-03-11 01:02:32.699243 | orchestrator | 2026-03-11 01:02:32.699246 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-11 01:02:32.699249 | orchestrator | Wednesday 11 March 2026 01:00:38 +0000 (0:00:00.892) 0:00:00.984 ******* 2026-03-11 01:02:32.699252 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-03-11 01:02:32.699256 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2026-03-11 01:02:32.699259 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left). 
2026-03-11 01:02:32.699262 | orchestrator | changed: [localhost]
2026-03-11 01:02:32.699265 | orchestrator |
2026-03-11 01:02:32.699268 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-11 01:02:32.699272 | orchestrator | Wednesday 11 March 2026 01:02:18 +0000 (0:01:40.052) 0:01:41.036 *******
2026-03-11 01:02:32.699279 | orchestrator | changed: [localhost]
2026-03-11 01:02:32.699282 | orchestrator |
2026-03-11 01:02:32.699285 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:02:32.699288 | orchestrator |
2026-03-11 01:02:32.699292 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:02:32.699295 | orchestrator | Wednesday 11 March 2026 01:02:22 +0000 (0:00:04.835) 0:01:45.872 *******
2026-03-11 01:02:32.699298 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:02:32.699301 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:02:32.699304 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:02:32.699321 | orchestrator |
2026-03-11 01:02:32.699329 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:02:32.699336 | orchestrator | Wednesday 11 March 2026 01:02:23 +0000 (0:00:00.278) 0:01:46.150 *******
2026-03-11 01:02:32.699341 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-11 01:02:32.699346 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-11 01:02:32.699351 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-11 01:02:32.699356 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-11 01:02:32.699361 | orchestrator |
2026-03-11 01:02:32.699365 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-11 01:02:32.699369 | orchestrator | skipping: no hosts matched
2026-03-11 01:02:32.699374 | orchestrator |
2026-03-11 01:02:32.699378 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:02:32.699382 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:02:32.699388 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:02:32.699393 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:02:32.699397 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:02:32.699401 | orchestrator |
2026-03-11 01:02:32.699405 | orchestrator |
2026-03-11 01:02:32.699410 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:02:32.699419 | orchestrator | Wednesday 11 March 2026 01:02:24 +0000 (0:00:01.264) 0:01:47.414 *******
2026-03-11 01:02:32.699423 | orchestrator | ===============================================================================
2026-03-11 01:02:32.699428 | orchestrator | Download ironic-agent initramfs --------------------------------------- 100.05s
2026-03-11 01:02:32.699432 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.84s
2026-03-11 01:02:32.699437 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.26s
2026-03-11 01:02:32.699441 | orchestrator | Ensure the destination directory exists --------------------------------- 0.89s
2026-03-11 01:02:32.699445 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2026-03-11 01:02:32.699450 | orchestrator |
2026-03-11 01:02:32.699454 | orchestrator |
2026-03-11 01:02:32.699458 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:02:32.699463 | orchestrator |
2026-03-11 01:02:32.699467 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:02:32.699477 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:00.488) 0:00:00.488 *******
2026-03-11 01:02:32.699482 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:02:32.699486 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:02:32.699491 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:02:32.699495 | orchestrator |
2026-03-11 01:02:32.699500 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:02:32.699506 | orchestrator | Wednesday 11 March 2026 01:00:38 +0000 (0:00:00.284) 0:00:00.773 *******
2026-03-11 01:02:32.699516 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-11 01:02:32.699521 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-11 01:02:32.699526 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-11 01:02:32.699530 | orchestrator |
2026-03-11 01:02:32.699534 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-11 01:02:32.699539 | orchestrator |
2026-03-11 01:02:32.699543 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-11 01:02:32.699548 | orchestrator | Wednesday 11 March 2026 01:00:38 +0000 (0:00:00.439) 0:00:01.212 *******
2026-03-11 01:02:32.699552 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:02:32.699557 | orchestrator |
2026-03-11 01:02:32.699562 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-11 01:02:32.699567 | orchestrator | Wednesday 11 March 2026 01:00:39 +0000 (0:00:00.506) 0:00:01.718 *******
2026-03-11 01:02:32.699572 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-11 01:02:32.699577 | orchestrator |
2026-03-11 01:02:32.699582 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-11 01:02:32.699587 | orchestrator | Wednesday 11 March 2026 01:00:43 +0000 (0:00:04.041) 0:00:05.760 *******
2026-03-11 01:02:32.699592 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-11 01:02:32.699597 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-11 01:02:32.699601 | orchestrator |
2026-03-11 01:02:32.699606 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-11 01:02:32.699611 | orchestrator | Wednesday 11 March 2026 01:00:49 +0000 (0:00:06.527) 0:00:12.288 *******
2026-03-11 01:02:32.699616 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-11 01:02:32.699621 | orchestrator |
2026-03-11 01:02:32.699626 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-11 01:02:32.699631 | orchestrator | Wednesday 11 March 2026 01:00:52 +0000 (0:00:03.206) 0:00:15.494 *******
2026-03-11 01:02:32.699636 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-11 01:02:32.699641 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-11 01:02:32.699646 | orchestrator |
2026-03-11 01:02:32.699651 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-11 01:02:32.699654 | orchestrator | Wednesday 11 March 2026 01:00:56 +0000 (0:00:03.655) 0:00:19.150 *******
2026-03-11 01:02:32.699657 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-11 01:02:32.699661 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-11 01:02:32.699673 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-11 01:02:32.699677 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-11 01:02:32.699684 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-11 01:02:32.699688 | orchestrator |
2026-03-11 01:02:32.699692 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-11 01:02:32.699695 | orchestrator | Wednesday 11 March 2026 01:01:12 +0000 (0:00:15.751) 0:00:34.901 *******
2026-03-11 01:02:32.699699 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-11 01:02:32.699703 | orchestrator |
2026-03-11 01:02:32.699706 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-11 01:02:32.699710 | orchestrator | Wednesday 11 March 2026 01:01:16 +0000 (0:00:03.802) 0:00:38.704 *******
2026-03-11 01:02:32.699720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.699736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.699741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.699745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699783 | orchestrator |
2026-03-11 01:02:32.699787 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-11 01:02:32.699790 | orchestrator | Wednesday 11 March 2026 01:01:17 +0000 (0:00:01.881) 0:00:40.586 *******
2026-03-11 01:02:32.699795 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-11 01:02:32.699800 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-11 01:02:32.699805 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-11 01:02:32.699810 | orchestrator |
2026-03-11 01:02:32.699815 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-11 01:02:32.699819 | orchestrator | Wednesday 11 March 2026 01:01:19 +0000 (0:00:01.117) 0:00:41.703 *******
2026-03-11 01:02:32.699824 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:32.699829 | orchestrator |
2026-03-11 01:02:32.699834 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-11 01:02:32.699839 | orchestrator | Wednesday 11 March 2026 01:01:19 +0000 (0:00:00.089) 0:00:41.792 *******
2026-03-11 01:02:32.699845 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:32.699850 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:32.699856 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:02:32.699861 | orchestrator |
2026-03-11 01:02:32.699866 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-11 01:02:32.699870 | orchestrator | Wednesday 11 March 2026 01:01:19 +0000 (0:00:00.337) 0:00:42.130 *******
2026-03-11 01:02:32.699874 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:02:32.699882 | orchestrator |
2026-03-11 01:02:32.699888 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-11 01:02:32.699894 | orchestrator | Wednesday 11 March 2026 01:01:19 +0000 (0:00:00.402) 0:00:42.532 *******
2026-03-11 01:02:32.699905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.699916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.699922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.699928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.699970 | orchestrator |
2026-03-11 01:02:32.699975 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-03-11 01:02:32.699980 | orchestrator | Wednesday 11 March 2026 01:01:23 +0000 (0:00:03.186) 0:00:45.719 *******
2026-03-11 01:02:32.699985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.699995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700009 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:32.700018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.700024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.700039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700050 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:32.700058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700064 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:02:32.700069 | orchestrator |
2026-03-11 01:02:32.700072 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-03-11 01:02:32.700075 | orchestrator | Wednesday 11 March 2026 01:01:24 +0000 (0:00:01.238) 0:00:46.957 *******
2026-03-11 01:02:32.700081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.700084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700094 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:32.700097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.700103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700115 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:32.700120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:32.700129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700139 | orchestrator | skipping: [testbed-node-2]
2026-03-11
01:02:32.700144 | orchestrator | 2026-03-11 01:02:32.700149 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-11 01:02:32.700155 | orchestrator | Wednesday 11 March 2026 01:01:25 +0000 (0:00:00.836) 0:00:47.793 ******* 2026-03-11 01:02:32.700164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700186 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700212 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700229 | orchestrator | 2026-03-11 01:02:32.700234 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-11 01:02:32.700239 | orchestrator | Wednesday 11 March 2026 01:01:28 +0000 (0:00:03.499) 0:00:51.293 ******* 2026-03-11 01:02:32.700244 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:32.700249 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:02:32.700255 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:02:32.700260 | orchestrator | 2026-03-11 01:02:32.700266 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-11 01:02:32.700271 | orchestrator | Wednesday 11 March 2026 01:01:31 +0000 (0:00:02.721) 0:00:54.015 
******* 2026-03-11 01:02:32.700275 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:02:32.700278 | orchestrator | 2026-03-11 01:02:32.700281 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-11 01:02:32.700284 | orchestrator | Wednesday 11 March 2026 01:01:32 +0000 (0:00:00.898) 0:00:54.913 ******* 2026-03-11 01:02:32.700287 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:02:32.700290 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:02:32.700294 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:02:32.700297 | orchestrator | 2026-03-11 01:02:32.700300 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-11 01:02:32.700303 | orchestrator | Wednesday 11 March 2026 01:01:33 +0000 (0:00:01.021) 0:00:55.935 ******* 2026-03-11 01:02:32.700306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700374 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700390 | orchestrator | 2026-03-11 01:02:32.700394 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-11 
01:02:32.700397 | orchestrator | Wednesday 11 March 2026 01:01:41 +0000 (0:00:08.233) 0:01:04.169 ******* 2026-03-11 01:02:32.700400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 01:02:32.700403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 01:02:32.700406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:02:32.700410 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:02:32.700415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 01:02:32.700423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 01:02:32.700427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:02:32.700430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 01:02:32.700433 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:02:32.700436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 01:02:32.700440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:02:32.700443 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:02:32.700446 | orchestrator | 2026-03-11 01:02:32.700451 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-11 01:02:32.700454 | orchestrator | Wednesday 11 March 2026 01:01:42 +0000 (0:00:01.275) 0:01:05.444 ******* 2026-03-11 01:02:32.700462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:32.700477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:32.700486 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:32.700504 | orchestrator |
2026-03-11 01:02:32.700509 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-11 01:02:32.700515 | orchestrator | Wednesday 11 March 2026 01:01:47 +0000 (0:00:04.347) 0:01:09.792 *******
2026-03-11 01:02:32.700521 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:32.700526 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:32.700531 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:02:32.700537 | orchestrator |
2026-03-11 01:02:32.700542 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-11 01:02:32.700547 | orchestrator | Wednesday 11 March 2026 01:01:47 +0000 (0:00:00.341) 0:01:10.133 *******
2026-03-11 01:02:32.700554 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:02:32.700558 | orchestrator |
2026-03-11 01:02:32.700563 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-11 01:02:32.700569 | orchestrator | Wednesday 11 March 2026 01:01:50 +0000 (0:00:02.547) 0:01:12.680 *******
2026-03-11 01:02:32.700693 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:02:32.700700 | orchestrator |
2026-03-11 01:02:32.700705 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-11 01:02:32.700709 | orchestrator | Wednesday 11 March 2026 01:01:52 +0000 (0:00:02.475) 0:01:15.155 *******
2026-03-11 01:02:32.700719 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:02:32.700724 | orchestrator |
2026-03-11 01:02:32.700729 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-11 01:02:32.700733 | orchestrator | Wednesday 11 March 2026 01:02:02 +0000 (0:00:09.854) 0:01:25.010 *******
2026-03-11 01:02:32.700738 | orchestrator |
2026-03-11 01:02:32.700742 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-11 01:02:32.700747 | orchestrator | Wednesday 11 March 2026 01:02:02 +0000 (0:00:00.054) 0:01:25.064 *******
2026-03-11 01:02:32.700752 | orchestrator |
2026-03-11 01:02:32.700757 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-11 01:02:32.700761 | orchestrator | Wednesday 11 March 2026 01:02:02 +0000 (0:00:00.051) 0:01:25.116 *******
2026-03-11 01:02:32.700766 | orchestrator |
2026-03-11 01:02:32.700774 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-11 01:02:32.700778 | orchestrator | Wednesday 11 March 2026 01:02:02 +0000 (0:00:00.053) 0:01:25.170 *******
2026-03-11 01:02:32.700782 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:02:32.700787 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:02:32.700791 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:02:32.700796 | orchestrator |
2026-03-11 01:02:32.700801 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-11 01:02:32.700805 | orchestrator | Wednesday 11 March 2026 01:02:13 +0000 (0:00:11.233) 0:01:36.403 *******
2026-03-11 01:02:32.700810 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:02:32.700815 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:02:32.700820 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:02:32.700824 | orchestrator |
2026-03-11 01:02:32.700829 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-11 01:02:32.700834 | orchestrator | Wednesday 11 March 2026 01:02:23 +0000 (0:00:09.904) 0:01:46.308 *******
2026-03-11 01:02:32.700845 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:02:32.700850 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:02:32.700855 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:02:32.700860 | orchestrator |
2026-03-11 01:02:32.700865 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:02:32.700870 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 01:02:32.700875 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 01:02:32.700881 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 01:02:32.700886 | orchestrator |
2026-03-11 01:02:32.700891 | orchestrator |
2026-03-11 01:02:32.700896 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:02:32.700901 | orchestrator | Wednesday 11 March 2026 01:02:30 +0000 (0:00:06.713) 0:01:53.022 *******
2026-03-11 01:02:32.700907 | orchestrator | ===============================================================================
2026-03-11 01:02:32.700912 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.75s
2026-03-11 01:02:32.700917 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.23s
2026-03-11 01:02:32.700922 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.90s
2026-03-11 01:02:32.700928 | orchestrator | barbican : Running barbican bootstrap container ------------------------- 9.85s
2026-03-11 01:02:32.700933 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.23s
2026-03-11 01:02:32.700938 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.71s
2026-03-11 01:02:32.700943 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.53s
2026-03-11 01:02:32.700954 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.35s
2026-03-11 01:02:32.700960 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.04s
2026-03-11 01:02:32.700965 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.80s
2026-03-11 01:02:32.700971 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.66s
2026-03-11 01:02:32.700976 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.50s
2026-03-11 01:02:32.700981 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.21s
2026-03-11 01:02:32.700986 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.19s
2026-03-11 01:02:32.700991 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.72s
2026-03-11 01:02:32.700996 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.55s
2026-03-11 01:02:32.701001 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.48s
2026-03-11 01:02:32.701006 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.88s
2026-03-11 01:02:32.701011 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.27s
2026-03-11 01:02:32.701016 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.23s
2026-03-11
01:02:32.701021 | orchestrator | 2026-03-11 01:02:32 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:32.701027 | orchestrator | 2026-03-11 01:02:32 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:32.701133 | orchestrator | 2026-03-11 01:02:32 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:32.701141 | orchestrator | 2026-03-11 01:02:32 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:35.724833 | orchestrator | 2026-03-11 01:02:35 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:35.725031 | orchestrator | 2026-03-11 01:02:35 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:35.725876 | orchestrator | 2026-03-11 01:02:35 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:35.726571 | orchestrator | 2026-03-11 01:02:35 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:35.726600 | orchestrator | 2026-03-11 01:02:35 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:38.755871 | orchestrator | 2026-03-11 01:02:38 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:38.756435 | orchestrator | 2026-03-11 01:02:38 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:38.758122 | orchestrator | 2026-03-11 01:02:38 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:38.758659 | orchestrator | 2026-03-11 01:02:38 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:38.758685 | orchestrator | 2026-03-11 01:02:38 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:41.790085 | orchestrator | 2026-03-11 01:02:41 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:41.791183 | orchestrator | 2026-03-11 01:02:41 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:41.792812 | orchestrator | 2026-03-11 01:02:41 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:41.793993 | orchestrator | 2026-03-11 01:02:41 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:41.794047 | orchestrator | 2026-03-11 01:02:41 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:44.827280 | orchestrator | 2026-03-11 01:02:44 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:44.827715 | orchestrator | 2026-03-11 01:02:44 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:44.828418 | orchestrator | 2026-03-11 01:02:44 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:44.830044 | orchestrator | 2026-03-11 01:02:44 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:44.830078 | orchestrator | 2026-03-11 01:02:44 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:47.862926 | orchestrator | 2026-03-11 01:02:47 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:47.864584 | orchestrator | 2026-03-11 01:02:47 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:47.866248 | orchestrator | 2026-03-11 01:02:47 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:47.867954 | orchestrator | 2026-03-11 01:02:47 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:47.867985 | orchestrator | 2026-03-11 01:02:47 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:50.903929 | orchestrator | 2026-03-11 01:02:50 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:50.905558 | orchestrator | 2026-03-11 01:02:50 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:50.907081 | orchestrator | 2026-03-11 01:02:50 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:50.908768 | orchestrator | 2026-03-11 01:02:50 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:50.908891 | orchestrator | 2026-03-11 01:02:50 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:53.963464 | orchestrator | 2026-03-11 01:02:53 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:53.965238 | orchestrator | 2026-03-11 01:02:53 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:53.967541 | orchestrator | 2026-03-11 01:02:53 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:53.969179 | orchestrator | 2026-03-11 01:02:53 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:53.969362 | orchestrator | 2026-03-11 01:02:53 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:02:57.012002 | orchestrator | 2026-03-11 01:02:57 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:02:57.015954 | orchestrator | 2026-03-11 01:02:57 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:02:57.019124 | orchestrator | 2026-03-11 01:02:57 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:02:57.020526 | orchestrator | 2026-03-11 01:02:57 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:02:57.020668 | orchestrator | 2026-03-11 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:00.068242 | orchestrator | 2026-03-11 01:03:00 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:00.070143 | orchestrator | 2026-03-11 01:03:00 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:00.072080 | orchestrator | 2026-03-11 01:03:00 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:00.074479 | orchestrator | 2026-03-11 01:03:00 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:00.074520 | orchestrator | 2026-03-11 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:03.117103 | orchestrator | 2026-03-11 01:03:03 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:03.118256 | orchestrator | 2026-03-11 01:03:03 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:03.120646 | orchestrator | 2026-03-11 01:03:03 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:03.121710 | orchestrator | 2026-03-11 01:03:03 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:03.121737 | orchestrator | 2026-03-11 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:06.163450 | orchestrator | 2026-03-11 01:03:06 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:06.165088 | orchestrator | 2026-03-11 01:03:06 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:06.167990 | orchestrator | 2026-03-11 01:03:06 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:06.170214 | orchestrator | 2026-03-11 01:03:06 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:06.170553 | orchestrator | 2026-03-11 01:03:06 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:09.204117 | orchestrator | 2026-03-11 01:03:09 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:09.204389 | orchestrator | 2026-03-11 01:03:09 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:09.204954 | orchestrator | 2026-03-11 01:03:09 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:09.205684 | orchestrator | 2026-03-11 01:03:09 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:09.205744 | orchestrator | 2026-03-11 01:03:09 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:12.243031 | orchestrator | 2026-03-11 01:03:12 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:12.243845 | orchestrator | 2026-03-11 01:03:12 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:12.244668 | orchestrator | 2026-03-11 01:03:12 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:12.245794 | orchestrator | 2026-03-11 01:03:12 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:12.245828 | orchestrator | 2026-03-11 01:03:12 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:15.285526 | orchestrator | 2026-03-11 01:03:15 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:15.287544 | orchestrator | 2026-03-11 01:03:15 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:15.290827 | orchestrator | 2026-03-11 01:03:15 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:15.297414 | orchestrator | 2026-03-11 01:03:15 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:15.298108 | orchestrator | 2026-03-11 01:03:15 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:18.340885 | orchestrator | 2026-03-11 01:03:18 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:18.343385 | orchestrator | 2026-03-11 01:03:18 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:18.346052 | orchestrator | 2026-03-11 01:03:18 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:18.348181 | orchestrator | 2026-03-11 01:03:18 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:18.348357 | orchestrator | 2026-03-11 01:03:18 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:21.387240 | orchestrator | 2026-03-11 01:03:21 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:21.388738 | orchestrator | 2026-03-11 01:03:21 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:21.390259 | orchestrator | 2026-03-11 01:03:21 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:21.392924 | orchestrator | 2026-03-11 01:03:21 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:21.392969 | orchestrator | 2026-03-11 01:03:21 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:24.440618 | orchestrator | 2026-03-11 01:03:24 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:24.442748 | orchestrator | 2026-03-11 01:03:24 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:24.444534 | orchestrator | 2026-03-11 01:03:24 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:24.446350 | orchestrator | 2026-03-11 01:03:24 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:24.446391 | orchestrator | 2026-03-11 01:03:24 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:27.480997 | orchestrator | 2026-03-11 01:03:27 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state STARTED
2026-03-11 01:03:27.481315 | orchestrator | 2026-03-11 01:03:27 | INFO  | Task
bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED
2026-03-11 01:03:27.482960 | orchestrator | 2026-03-11 01:03:27 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED
2026-03-11 01:03:27.485606 | orchestrator | 2026-03-11 01:03:27 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED
2026-03-11 01:03:27.486151 | orchestrator | 2026-03-11 01:03:27 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:30.516782 | orchestrator | 2026-03-11 01:03:30 | INFO  | Task fe643394-f81a-4648-b19f-39a5faeb2185 is in state SUCCESS
2026-03-11 01:03:30.518139 | orchestrator |
2026-03-11 01:03:30.518206 | orchestrator |
2026-03-11 01:03:30.518213 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:03:30.518218 | orchestrator |
2026-03-11 01:03:30.518222 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:03:30.518227 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:00.278) 0:00:00.278 *******
2026-03-11 01:03:30.518257 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:03:30.518266 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:03:30.518273 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:03:30.518279 | orchestrator |
2026-03-11 01:03:30.518287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:03:30.518291 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:00.255) 0:00:00.533 *******
2026-03-11 01:03:30.518302 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-11 01:03:30.518316 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-11 01:03:30.518325 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-11 01:03:30.518332 | orchestrator |
2026-03-11 01:03:30.518339 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-11 01:03:30.518363 | orchestrator |
2026-03-11 01:03:30.518370 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-11 01:03:30.518376 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:00.392) 0:00:00.926 *******
2026-03-11 01:03:30.518383 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:03:30.518389 | orchestrator |
2026-03-11 01:03:30.518396 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-11 01:03:30.518402 | orchestrator | Wednesday 11 March 2026 01:00:38 +0000 (0:00:00.582) 0:00:01.508 *******
2026-03-11 01:03:30.518408 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-11 01:03:30.518414 | orchestrator |
2026-03-11 01:03:30.518422 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-11 01:03:30.518428 | orchestrator | Wednesday 11 March 2026 01:00:42 +0000 (0:00:04.181) 0:00:05.690 *******
2026-03-11 01:03:30.518434 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-11 01:03:30.518441 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-11 01:03:30.518448 | orchestrator |
2026-03-11 01:03:30.518456 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-11 01:03:30.518463 | orchestrator | Wednesday 11 March 2026 01:00:48 +0000 (0:00:06.040) 0:00:11.730 *******
2026-03-11 01:03:30.518470 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-11 01:03:30.518477 | orchestrator |
2026-03-11 01:03:30.518483 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-11 01:03:30.518491 | orchestrator | Wednesday 11 March 2026 01:00:52 +0000 (0:00:03.576) 0:00:15.307 *******
2026-03-11 01:03:30.518495 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-11 01:03:30.518502 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-11 01:03:30.518508 | orchestrator |
2026-03-11 01:03:30.518524 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-11 01:03:30.518531 | orchestrator | Wednesday 11 March 2026 01:00:55 +0000 (0:00:03.615) 0:00:18.923 *******
2026-03-11 01:03:30.518537 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-11 01:03:30.518545 | orchestrator |
2026-03-11 01:03:30.518552 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-11 01:03:30.518559 | orchestrator | Wednesday 11 March 2026 01:00:59 +0000 (0:00:03.199) 0:00:22.123 *******
2026-03-11 01:03:30.518566 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-11 01:03:30.518572 | orchestrator |
2026-03-11 01:03:30.518579 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-11 01:03:30.518586 | orchestrator | Wednesday 11 March 2026 01:01:03 +0000 (0:00:03.924) 0:00:26.047 *******
2026-03-11 01:03:30.518596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy':
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.518616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.518627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.518631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.518639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.518643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.518648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.518727 | orchestrator |
2026-03-11 01:03:30.518731 |
orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-11 01:03:30.518735 | orchestrator | Wednesday 11 March 2026 01:01:05 +0000 (0:00:02.592) 0:00:28.639 ******* 2026-03-11 01:03:30.518741 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:30.518748 | orchestrator | 2026-03-11 01:03:30.518755 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-11 01:03:30.518761 | orchestrator | Wednesday 11 March 2026 01:01:05 +0000 (0:00:00.105) 0:00:28.745 ******* 2026-03-11 01:03:30.518767 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:30.518774 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:30.518782 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:30.518789 | orchestrator | 2026-03-11 01:03:30.518796 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-11 01:03:30.518803 | orchestrator | Wednesday 11 March 2026 01:01:05 +0000 (0:00:00.246) 0:00:28.991 ******* 2026-03-11 01:03:30.518814 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:03:30.518822 | orchestrator | 2026-03-11 01:03:30.518828 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-11 01:03:30.518836 | orchestrator | Wednesday 11 March 2026 01:01:06 +0000 (0:00:00.585) 0:00:29.577 ******* 2026-03-11 01:03:30.518844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.518849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.518853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.518860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.518981 | orchestrator | 2026-03-11 01:03:30.518989 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-11 01:03:30.518996 | orchestrator | Wednesday 11 March 2026 01:01:12 +0000 (0:00:05.511) 0:00:35.089 ******* 2026-03-11 01:03:30.519004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:30.519016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:30.519020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-03-11 01:03:30.519062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:30.519072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:30.519084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519106 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:30.519114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519127 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:30.519139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:30.519146 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:30.519151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519176 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:30.519181 | orchestrator | 2026-03-11 01:03:30.519188 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-11 01:03:30.519195 | orchestrator | Wednesday 11 March 2026 01:01:13 +0000 (0:00:01.106) 0:00:36.195 ******* 2026-03-11 01:03:30.519202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:30.519214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:30.519221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519347 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:30.519359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:30.519373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:30.519380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519415 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:30.519422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:30.519433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:30.519441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.519477 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:30.519485 | orchestrator | 2026-03-11 01:03:30.519493 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-11 01:03:30.519500 | orchestrator | Wednesday 11 March 2026 01:01:14 +0000 (0:00:01.039) 0:00:37.235 ******* 2026-03-11 01:03:30.519507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.519519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.519527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.519543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519687 | orchestrator | 2026-03-11 01:03:30.519691 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-11 01:03:30.519696 | orchestrator | Wednesday 11 March 2026 01:01:20 +0000 (0:00:06.600) 0:00:43.835 ******* 2026-03-11 01:03:30.519700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.519707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.519715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.519719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.519748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519800 | orchestrator |
2026-03-11 01:03:30.519804 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-11 01:03:30.519808 | orchestrator | Wednesday 11 March 2026 01:01:41 +0000 (0:00:20.841) 0:01:04.677 *******
2026-03-11 01:03:30.519812 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-11 01:03:30.519816 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-11 01:03:30.519820 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-11 01:03:30.519824 | orchestrator |
2026-03-11 01:03:30.519828 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-11 01:03:30.519832 | orchestrator | Wednesday 11 March 2026 01:01:48 +0000 (0:00:06.684) 0:01:11.361 *******
2026-03-11 01:03:30.519836 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-11 01:03:30.519840 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-11 01:03:30.519844 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-11 01:03:30.519848 | orchestrator |
2026-03-11 01:03:30.519854 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-11 01:03:30.519858 | orchestrator | Wednesday 11 March 2026 01:01:51 +0000 (0:00:03.302) 0:01:14.664 *******
2026-03-11 01:03:30.519866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.519871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.519875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.519881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.519886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.519908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.519945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.519977 | orchestrator |
2026-03-11 01:03:30.519981 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-11 01:03:30.519985 | orchestrator | Wednesday 11 March 2026 01:01:55 +0000 (0:00:03.589) 0:01:18.254 *******
2026-03-11 01:03:30.520194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.520208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.520212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.520221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.520272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.520296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.520322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520353 | orchestrator |
2026-03-11 01:03:30.520357 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-11 01:03:30.520361 | orchestrator | Wednesday 11 March 2026 01:01:58 +0000 (0:00:02.896) 0:01:21.151 *******
2026-03-11 01:03:30.520365 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:03:30.520370 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:03:30.520374 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:03:30.520378 | orchestrator |
2026-03-11 01:03:30.520382 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-11 01:03:30.520386 | orchestrator | Wednesday 11 March 2026 01:01:58 +0000 (0:00:00.437) 0:01:21.588 *******
2026-03-11 01:03:30.520393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.520398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.520402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520424 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:03:30.520431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:30.520435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:30.520439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:30.520452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.520457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.520461 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:30.520467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:30.520472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:30.520476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.520483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.520489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.520493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:30.520498 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:30.520502 | orchestrator | 2026-03-11 01:03:30.520506 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-11 01:03:30.520510 | orchestrator | Wednesday 11 March 2026 01:02:00 +0000 (0:00:01.507) 0:01:23.096 ******* 2026-03-11 01:03:30.520516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.520521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.520525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:30.520535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:30.520614 | orchestrator | 2026-03-11 01:03:30.520618 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-11 01:03:30.520623 | orchestrator | Wednesday 11 March 2026 01:02:04 +0000 (0:00:04.422) 0:01:27.518 ******* 2026-03-11 01:03:30.520630 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:30.520636 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:30.520646 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:30.520654 | orchestrator | 2026-03-11 01:03:30.520660 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-11 01:03:30.520666 | orchestrator | Wednesday 11 March 2026 01:02:04 +0000 (0:00:00.397) 0:01:27.916 ******* 2026-03-11 01:03:30.520674 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-11 01:03:30.520681 | orchestrator | 2026-03-11 01:03:30.520687 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-11 01:03:30.520694 | orchestrator | Wednesday 11 March 2026 01:02:07 +0000 (0:00:02.269) 0:01:30.186 ******* 2026-03-11 01:03:30.520700 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-11 01:03:30.520707 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-11 01:03:30.520713 | orchestrator | 2026-03-11 01:03:30.520720 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-11 01:03:30.520732 | orchestrator | Wednesday 11 March 2026 01:02:10 +0000 (0:00:03.024) 0:01:33.210 ******* 2026-03-11 01:03:30.520739 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520746 | orchestrator | 2026-03-11 01:03:30.520753 | orchestrator | TASK [designate : Flush handlers] 
********************************************** 2026-03-11 01:03:30.520760 | orchestrator | Wednesday 11 March 2026 01:02:24 +0000 (0:00:14.576) 0:01:47.787 ******* 2026-03-11 01:03:30.520773 | orchestrator | 2026-03-11 01:03:30.520777 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-11 01:03:30.520781 | orchestrator | Wednesday 11 March 2026 01:02:24 +0000 (0:00:00.192) 0:01:47.979 ******* 2026-03-11 01:03:30.520785 | orchestrator | 2026-03-11 01:03:30.520789 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-11 01:03:30.520793 | orchestrator | Wednesday 11 March 2026 01:02:25 +0000 (0:00:00.192) 0:01:48.172 ******* 2026-03-11 01:03:30.520797 | orchestrator | 2026-03-11 01:03:30.520801 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-11 01:03:30.520805 | orchestrator | Wednesday 11 March 2026 01:02:25 +0000 (0:00:00.269) 0:01:48.441 ******* 2026-03-11 01:03:30.520809 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:30.520813 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:30.520817 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520822 | orchestrator | 2026-03-11 01:03:30.520826 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-11 01:03:30.520831 | orchestrator | Wednesday 11 March 2026 01:02:35 +0000 (0:00:10.192) 0:01:58.634 ******* 2026-03-11 01:03:30.520836 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520840 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:30.520845 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:30.520849 | orchestrator | 2026-03-11 01:03:30.520854 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-11 01:03:30.520858 | orchestrator | Wednesday 11 March 2026 01:02:47 +0000 (0:00:11.601) 
0:02:10.235 ******* 2026-03-11 01:03:30.520863 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:30.520868 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520872 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:30.520877 | orchestrator | 2026-03-11 01:03:30.520882 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-11 01:03:30.520886 | orchestrator | Wednesday 11 March 2026 01:02:56 +0000 (0:00:09.765) 0:02:20.001 ******* 2026-03-11 01:03:30.520891 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520895 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:30.520900 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:30.520905 | orchestrator | 2026-03-11 01:03:30.520910 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-11 01:03:30.520914 | orchestrator | Wednesday 11 March 2026 01:03:07 +0000 (0:00:10.327) 0:02:30.328 ******* 2026-03-11 01:03:30.520919 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:30.520923 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:30.520928 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520932 | orchestrator | 2026-03-11 01:03:30.520937 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-11 01:03:30.520942 | orchestrator | Wednesday 11 March 2026 01:03:16 +0000 (0:00:08.790) 0:02:39.118 ******* 2026-03-11 01:03:30.520946 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520951 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:30.520957 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:30.520964 | orchestrator | 2026-03-11 01:03:30.520973 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-11 01:03:30.520979 | orchestrator | Wednesday 11 March 2026 01:03:21 +0000 (0:00:05.529) 0:02:44.648 
******* 2026-03-11 01:03:30.520983 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:30.520988 | orchestrator | 2026-03-11 01:03:30.520993 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:03:30.520999 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:03:30.521004 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-11 01:03:30.521013 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-11 01:03:30.521017 | orchestrator | 2026-03-11 01:03:30.521024 | orchestrator | 2026-03-11 01:03:30.521031 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:03:30.521039 | orchestrator | Wednesday 11 March 2026 01:03:28 +0000 (0:00:06.859) 0:02:51.508 ******* 2026-03-11 01:03:30.521046 | orchestrator | =============================================================================== 2026-03-11 01:03:30.521052 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.84s 2026-03-11 01:03:30.521058 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.58s 2026-03-11 01:03:30.521065 | orchestrator | designate : Restart designate-api container ---------------------------- 11.60s 2026-03-11 01:03:30.521072 | orchestrator | designate : Restart designate-producer container ----------------------- 10.33s 2026-03-11 01:03:30.521078 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.19s 2026-03-11 01:03:30.521085 | orchestrator | designate : Restart designate-central container ------------------------- 9.77s 2026-03-11 01:03:30.521092 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.79s 2026-03-11 01:03:30.521100 | 
orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.86s 2026-03-11 01:03:30.521106 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.68s 2026-03-11 01:03:30.521110 | orchestrator | designate : Copying over config.json files for services ----------------- 6.60s 2026-03-11 01:03:30.521119 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.04s 2026-03-11 01:03:30.521124 | orchestrator | designate : Restart designate-worker container -------------------------- 5.53s 2026-03-11 01:03:30.521129 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.51s 2026-03-11 01:03:30.521133 | orchestrator | designate : Check designate containers ---------------------------------- 4.42s 2026-03-11 01:03:30.521138 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.18s 2026-03-11 01:03:30.521142 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.92s 2026-03-11 01:03:30.521147 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.62s 2026-03-11 01:03:30.521152 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.59s 2026-03-11 01:03:30.521156 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.58s 2026-03-11 01:03:30.521161 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.30s 2026-03-11 01:03:30.521166 | orchestrator | 2026-03-11 01:03:30 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:30.521989 | orchestrator | 2026-03-11 01:03:30 | INFO  | Task 8afd3cea-3830-436c-bdb0-eab93c94a58f is in state STARTED 2026-03-11 01:03:30.525393 | orchestrator | 2026-03-11 01:03:30 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED 
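For readers decoding the `(item={...})` entries in the tasks above: each loop item is a kolla-ansible service definition dict, keyed by service name, and the container healthcheck is carried inside it. A minimal sketch of one such entry, with field names and values transcribed from the log (the `healthcheck_command` helper is hypothetical, added only to show where the healthcheck shell command lives — it is not part of the deployment):

```python
# One loop item from the "Check designate containers" task, transcribed
# from the log above. kolla-ansible iterates a dict of these per service.
designate_central = {
    "key": "designate-central",
    "value": {
        "container_name": "designate_central",
        "group": "designate-central",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/designate-central:19.0.1.20251130",
        "volumes": [
            "/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            # Docker-style healthcheck: ["CMD-SHELL", <command run in the container>]
            "test": ["CMD-SHELL", "healthcheck_port designate-central 5672"],
            "timeout": "30",
        },
    },
}


def healthcheck_command(service: dict) -> str:
    """Illustrative helper: extract the shell command the healthcheck runs."""
    return service["value"]["healthcheck"]["test"][1]


print(healthcheck_command(designate_central))
```

Note from the log that the API service additionally carries a `haproxy` sub-dict (internal and external frontends on port 9001), while backend services like `designate-central` only probe their RabbitMQ connection via `healthcheck_port <name> 5672`.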
2026-03-11 01:03:30.527111 | orchestrator | 2026-03-11 01:03:30 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:30.527239 | orchestrator | 2026-03-11 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:33.568342 | orchestrator | 2026-03-11 01:03:33 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:33.569672 | orchestrator | 2026-03-11 01:03:33 | INFO  | Task 8afd3cea-3830-436c-bdb0-eab93c94a58f is in state STARTED 2026-03-11 01:03:33.570642 | orchestrator | 2026-03-11 01:03:33 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state STARTED 2026-03-11 01:03:33.571899 | orchestrator | 2026-03-11 01:03:33 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:33.572768 | orchestrator | 2026-03-11 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:36.596308 | orchestrator | 2026-03-11 01:03:36 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:36.596582 | orchestrator | 2026-03-11 01:03:36 | INFO  | Task 8afd3cea-3830-436c-bdb0-eab93c94a58f is in state SUCCESS 2026-03-11 01:03:36.599052 | orchestrator | 2026-03-11 01:03:36 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:03:36.599753 | orchestrator | 2026-03-11 01:03:36 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:36.600514 | orchestrator | 2026-03-11 01:03:36 | INFO  | Task 33e6fa12-b1f0-4b2d-80cd-795a8d228852 is in state SUCCESS 2026-03-11 01:03:36.601470 | orchestrator | 2026-03-11 01:03:36.601495 | orchestrator | 2026-03-11 01:03:36.601500 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:03:36.601505 | orchestrator | 2026-03-11 01:03:36.601510 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:03:36.601514 | 
orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.156) 0:00:00.156 ******* 2026-03-11 01:03:36.601518 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:03:36.601523 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:03:36.601527 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:03:36.601531 | orchestrator | 2026-03-11 01:03:36.601535 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:03:36.601538 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.312) 0:00:00.468 ******* 2026-03-11 01:03:36.601542 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-11 01:03:36.601547 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-11 01:03:36.601551 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-11 01:03:36.601555 | orchestrator | 2026-03-11 01:03:36.601558 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-11 01:03:36.601562 | orchestrator | 2026-03-11 01:03:36.601566 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-11 01:03:36.601570 | orchestrator | Wednesday 11 March 2026 01:03:33 +0000 (0:00:00.772) 0:00:01.240 ******* 2026-03-11 01:03:36.601574 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:03:36.601578 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:03:36.601581 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:03:36.601585 | orchestrator | 2026-03-11 01:03:36.601589 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:03:36.601593 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:03:36.601599 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:03:36.601602 | orchestrator | testbed-node-2 : 
ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:03:36.601606 | orchestrator | 2026-03-11 01:03:36.601610 | orchestrator | 2026-03-11 01:03:36.601614 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:03:36.601617 | orchestrator | Wednesday 11 March 2026 01:03:34 +0000 (0:00:01.082) 0:00:02.324 ******* 2026-03-11 01:03:36.601621 | orchestrator | =============================================================================== 2026-03-11 01:03:36.601625 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.08s 2026-03-11 01:03:36.601629 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2026-03-11 01:03:36.601632 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-03-11 01:03:36.601636 | orchestrator | 2026-03-11 01:03:36.601640 | orchestrator | 2026-03-11 01:03:36.601643 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:03:36.601666 | orchestrator | 2026-03-11 01:03:36.601670 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:03:36.601674 | orchestrator | Wednesday 11 March 2026 01:02:31 +0000 (0:00:00.328) 0:00:00.328 ******* 2026-03-11 01:03:36.601678 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:03:36.601682 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:03:36.601685 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:03:36.601689 | orchestrator | 2026-03-11 01:03:36.601693 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:03:36.601697 | orchestrator | Wednesday 11 March 2026 01:02:32 +0000 (0:00:00.449) 0:00:00.777 ******* 2026-03-11 01:03:36.601701 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-11 01:03:36.601705 
| orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-11 01:03:36.601708 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-11 01:03:36.601712 | orchestrator | 2026-03-11 01:03:36.601716 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-11 01:03:36.601720 | orchestrator | 2026-03-11 01:03:36.601723 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-11 01:03:36.601727 | orchestrator | Wednesday 11 March 2026 01:02:32 +0000 (0:00:00.626) 0:00:01.403 ******* 2026-03-11 01:03:36.601731 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:03:36.601735 | orchestrator | 2026-03-11 01:03:36.601738 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-11 01:03:36.601742 | orchestrator | Wednesday 11 March 2026 01:02:33 +0000 (0:00:00.766) 0:00:02.170 ******* 2026-03-11 01:03:36.601746 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-11 01:03:36.601750 | orchestrator | 2026-03-11 01:03:36.601753 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-11 01:03:36.601757 | orchestrator | Wednesday 11 March 2026 01:02:36 +0000 (0:00:03.021) 0:00:05.194 ******* 2026-03-11 01:03:36.601761 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-11 01:03:36.601765 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-11 01:03:36.601769 | orchestrator | 2026-03-11 01:03:36.601772 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-11 01:03:36.601791 | orchestrator | Wednesday 11 March 2026 01:02:43 +0000 (0:00:06.489) 
0:00:11.684 ******* 2026-03-11 01:03:36.601795 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:03:36.601799 | orchestrator | 2026-03-11 01:03:36.601804 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-11 01:03:36.601808 | orchestrator | Wednesday 11 March 2026 01:02:46 +0000 (0:00:03.370) 0:00:15.054 ******* 2026-03-11 01:03:36.601819 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:03:36.601823 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-11 01:03:36.601826 | orchestrator | 2026-03-11 01:03:36.601830 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-11 01:03:36.601834 | orchestrator | Wednesday 11 March 2026 01:02:49 +0000 (0:00:03.392) 0:00:18.446 ******* 2026-03-11 01:03:36.601838 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:03:36.601842 | orchestrator | 2026-03-11 01:03:36.601845 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-11 01:03:36.601849 | orchestrator | Wednesday 11 March 2026 01:02:52 +0000 (0:00:03.059) 0:00:21.506 ******* 2026-03-11 01:03:36.601853 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-11 01:03:36.601857 | orchestrator | 2026-03-11 01:03:36.601860 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-11 01:03:36.601864 | orchestrator | Wednesday 11 March 2026 01:02:56 +0000 (0:00:03.184) 0:00:24.690 ******* 2026-03-11 01:03:36.601872 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:36.601879 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:36.601886 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:36.601892 | orchestrator | 2026-03-11 01:03:36.601901 | orchestrator | TASK [placement : Ensuring config directories exist] 
*************************** 2026-03-11 01:03:36.601908 | orchestrator | Wednesday 11 March 2026 01:02:56 +0000 (0:00:00.296) 0:00:24.987 ******* 2026-03-11 01:03:36.601928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.601946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.601953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.601959 | orchestrator | 2026-03-11 01:03:36.601965 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-11 01:03:36.601975 | orchestrator | Wednesday 11 March 2026 01:02:57 +0000 (0:00:00.830) 0:00:25.817 ******* 2026-03-11 01:03:36.601981 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:36.601986 | orchestrator | 2026-03-11 01:03:36.601992 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-11 01:03:36.602001 | orchestrator | Wednesday 11 March 2026 01:02:57 +0000 (0:00:00.144) 0:00:25.962 ******* 2026-03-11 01:03:36.602007 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:36.602051 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:36.602059 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:36.602065 | orchestrator | 2026-03-11 01:03:36.602071 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2026-03-11 01:03:36.602083 | orchestrator | Wednesday 11 March 2026 01:02:57 +0000 (0:00:00.526) 0:00:26.489 ******* 2026-03-11 01:03:36.602088 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:03:36.602094 | orchestrator | 2026-03-11 01:03:36.602099 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-11 01:03:36.602105 | orchestrator | Wednesday 11 March 2026 01:02:58 +0000 (0:00:00.619) 0:00:27.108 ******* 2026-03-11 01:03:36.602113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602135 | orchestrator | 2026-03-11 01:03:36.602142 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-11 01:03:36.602149 | orchestrator | Wednesday 11 March 2026 01:03:00 +0000 (0:00:01.566) 0:00:28.675 ******* 2026-03-11 01:03:36.602168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602179 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:36.602184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602189 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:36.602194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602198 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:36.602203 | orchestrator | 2026-03-11 01:03:36.602207 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-11 01:03:36.602212 | orchestrator | Wednesday 11 March 2026 01:03:00 +0000 (0:00:00.888) 0:00:29.563 ******* 2026-03-11 01:03:36.602216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602221 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:36.602255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602264 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:36.602269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602274 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:36.602278 | orchestrator | 2026-03-11 01:03:36.602283 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-11 01:03:36.602287 | orchestrator | Wednesday 11 March 2026 01:03:01 +0000 (0:00:00.692) 0:00:30.255 ******* 2026-03-11 01:03:36.602292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602318 | orchestrator | 2026-03-11 01:03:36.602323 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-11 01:03:36.602327 | orchestrator | Wednesday 11 March 2026 01:03:03 +0000 (0:00:01.371) 0:00:31.627 ******* 2026-03-11 01:03:36.602332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602346 | orchestrator | 2026-03-11 01:03:36.602350 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-11 01:03:36.602355 | orchestrator | Wednesday 11 March 2026 01:03:05 +0000 (0:00:02.513) 0:00:34.140 ******* 2026-03-11 01:03:36.602363 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-11 01:03:36.602368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-11 01:03:36.602371 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-11 01:03:36.602375 | orchestrator | 2026-03-11 01:03:36.602379 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-11 01:03:36.602383 | orchestrator | Wednesday 11 March 2026 01:03:06 +0000 (0:00:01.415) 0:00:35.555 ******* 2026-03-11 01:03:36.602387 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:36.602391 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:36.602394 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:36.602398 | orchestrator | 2026-03-11 01:03:36.602402 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-11 01:03:36.602411 | orchestrator | Wednesday 11 March 2026 01:03:08 +0000 (0:00:01.300) 0:00:36.855 
******* 2026-03-11 01:03:36.602423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602440 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
01:03:36.602450 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:36.602456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:03:36.602468 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:36.602474 | orchestrator | 2026-03-11 01:03:36.602481 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-11 01:03:36.602488 | orchestrator | Wednesday 11 March 2026 01:03:08 +0000 (0:00:00.575) 0:00:37.430 ******* 2026-03-11 01:03:36.602496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:03:36.602525 | orchestrator | 2026-03-11 01:03:36.602531 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-11 01:03:36.602537 | orchestrator | Wednesday 11 March 2026 01:03:10 +0000 (0:00:01.239) 0:00:38.669 ******* 2026-03-11 01:03:36.602542 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:36.602548 | orchestrator | 2026-03-11 01:03:36.602554 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-11 01:03:36.602560 | orchestrator | Wednesday 11 March 2026 01:03:12 +0000 (0:00:02.380) 0:00:41.050 ******* 2026-03-11 01:03:36.602566 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:36.602572 | orchestrator | 2026-03-11 01:03:36.602578 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-11 01:03:36.602584 | orchestrator | Wednesday 11 March 2026 01:03:14 +0000 (0:00:02.225) 0:00:43.275 ******* 2026-03-11 01:03:36.602591 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:36.602603 | orchestrator | 2026-03-11 01:03:36.602609 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-11 01:03:36.602615 | orchestrator | Wednesday 11 March 2026 01:03:28 +0000 (0:00:13.899) 0:00:57.175 ******* 2026-03-11 01:03:36.602622 | orchestrator | 2026-03-11 01:03:36.602626 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-11 01:03:36.602630 | orchestrator | Wednesday 11 March 2026 01:03:28 +0000 (0:00:00.086) 0:00:57.262 ******* 2026-03-11 01:03:36.602634 | orchestrator | 2026-03-11 01:03:36.602638 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-03-11 01:03:36.602641 | orchestrator | Wednesday 11 March 2026 01:03:28 +0000 (0:00:00.068) 0:00:57.331 ******* 2026-03-11 01:03:36.602645 | orchestrator | 2026-03-11 01:03:36.602649 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-11 01:03:36.602653 | orchestrator | Wednesday 11 March 2026 01:03:28 +0000 (0:00:00.072) 0:00:57.403 ******* 2026-03-11 01:03:36.602657 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:36.602661 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:36.602664 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:36.602668 | orchestrator | 2026-03-11 01:03:36.602672 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:03:36.602677 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-11 01:03:36.602682 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:03:36.602686 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:03:36.602690 | orchestrator | 2026-03-11 01:03:36.602694 | orchestrator | 2026-03-11 01:03:36.602698 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:03:36.602702 | orchestrator | Wednesday 11 March 2026 01:03:33 +0000 (0:00:04.562) 0:01:01.965 ******* 2026-03-11 01:03:36.602705 | orchestrator | =============================================================================== 2026-03-11 01:03:36.602709 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.90s 2026-03-11 01:03:36.602713 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.49s 2026-03-11 01:03:36.602723 | orchestrator | placement : Restart 
placement-api container ----------------------------- 4.56s 2026-03-11 01:03:36.602727 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.39s 2026-03-11 01:03:36.602731 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.37s 2026-03-11 01:03:36.602735 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.18s 2026-03-11 01:03:36.602742 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.06s 2026-03-11 01:03:36.602746 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.02s 2026-03-11 01:03:36.602750 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.51s 2026-03-11 01:03:36.602754 | orchestrator | placement : Creating placement databases -------------------------------- 2.38s 2026-03-11 01:03:36.602757 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.23s 2026-03-11 01:03:36.602761 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s 2026-03-11 01:03:36.602765 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.42s 2026-03-11 01:03:36.602769 | orchestrator | placement : Copying over config.json files for services ----------------- 1.37s 2026-03-11 01:03:36.602772 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s 2026-03-11 01:03:36.602776 | orchestrator | placement : Check placement containers ---------------------------------- 1.24s 2026-03-11 01:03:36.602784 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.89s 2026-03-11 01:03:36.602789 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.83s 2026-03-11 01:03:36.602792 | orchestrator | placement : include_tasks 
----------------------------------------------- 0.77s 2026-03-11 01:03:36.602796 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.69s 2026-03-11 01:03:36.602800 | orchestrator | 2026-03-11 01:03:36 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:36.602804 | orchestrator | 2026-03-11 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:39.631650 | orchestrator | 2026-03-11 01:03:39 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:39.632033 | orchestrator | 2026-03-11 01:03:39 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:03:39.632697 | orchestrator | 2026-03-11 01:03:39 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:39.633437 | orchestrator | 2026-03-11 01:03:39 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:39.633454 | orchestrator | 2026-03-11 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:42.675744 | orchestrator | 2026-03-11 01:03:42 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:42.675808 | orchestrator | 2026-03-11 01:03:42 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:03:42.675817 | orchestrator | 2026-03-11 01:03:42 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:42.676288 | orchestrator | 2026-03-11 01:03:42 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:42.676341 | orchestrator | 2026-03-11 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:45.715532 | orchestrator | 2026-03-11 01:03:45 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:45.719057 | orchestrator | 2026-03-11 01:03:45 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in 
state STARTED 2026-03-11 01:03:45.721555 | orchestrator | 2026-03-11 01:03:45 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:45.724480 | orchestrator | 2026-03-11 01:03:45 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:45.724532 | orchestrator | 2026-03-11 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:48.766984 | orchestrator | 2026-03-11 01:03:48 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:48.769493 | orchestrator | 2026-03-11 01:03:48 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:03:48.771556 | orchestrator | 2026-03-11 01:03:48 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:48.772299 | orchestrator | 2026-03-11 01:03:48 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:48.772446 | orchestrator | 2026-03-11 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:51.825325 | orchestrator | 2026-03-11 01:03:51 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:51.825699 | orchestrator | 2026-03-11 01:03:51 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:03:51.826438 | orchestrator | 2026-03-11 01:03:51 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:51.827844 | orchestrator | 2026-03-11 01:03:51 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:51.827940 | orchestrator | 2026-03-11 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:54.856374 | orchestrator | 2026-03-11 01:03:54 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:54.857593 | orchestrator | 2026-03-11 01:03:54 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 
01:03:54.858239 | orchestrator | 2026-03-11 01:03:54 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:54.859308 | orchestrator | 2026-03-11 01:03:54 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:54.859331 | orchestrator | 2026-03-11 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:03:57.882053 | orchestrator | 2026-03-11 01:03:57 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:03:57.883037 | orchestrator | 2026-03-11 01:03:57 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:03:57.883812 | orchestrator | 2026-03-11 01:03:57 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:03:57.884553 | orchestrator | 2026-03-11 01:03:57 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:03:57.884728 | orchestrator | 2026-03-11 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:01.105740 | orchestrator | 2026-03-11 01:04:01 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:04:01.106376 | orchestrator | 2026-03-11 01:04:01 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:01.111917 | orchestrator | 2026-03-11 01:04:01 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:04:01.112609 | orchestrator | 2026-03-11 01:04:01 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:01.112651 | orchestrator | 2026-03-11 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:04.138198 | orchestrator | 2026-03-11 01:04:04 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:04:04.140052 | orchestrator | 2026-03-11 01:04:04 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:04.140742 | orchestrator 
| 2026-03-11 01:04:04 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:04:04.141244 | orchestrator | 2026-03-11 01:04:04 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:04.141314 | orchestrator | 2026-03-11 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:07.180541 | orchestrator | 2026-03-11 01:04:07 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:04:07.180926 | orchestrator | 2026-03-11 01:04:07 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:07.182078 | orchestrator | 2026-03-11 01:04:07 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:04:07.182852 | orchestrator | 2026-03-11 01:04:07 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:07.183112 | orchestrator | 2026-03-11 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:10.225022 | orchestrator | 2026-03-11 01:04:10 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:04:10.225729 | orchestrator | 2026-03-11 01:04:10 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:10.226687 | orchestrator | 2026-03-11 01:04:10 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state STARTED 2026-03-11 01:04:10.227150 | orchestrator | 2026-03-11 01:04:10 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:10.227477 | orchestrator | 2026-03-11 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:13.285815 | orchestrator | 2026-03-11 01:04:13 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:04:13.286319 | orchestrator | 2026-03-11 01:04:13 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:13.286352 | orchestrator | 2026-03-11 01:04:13 | INFO  | 
Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:13.286357 | orchestrator | 2026-03-11 01:04:13 | INFO  | Task 5b772cff-8187-42b2-8acc-b0d5bbececa8 is in state SUCCESS 2026-03-11 01:04:13.286360 | orchestrator | 2026-03-11 01:04:13 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:13.286364 | orchestrator | 2026-03-11 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:16.356835 | orchestrator | 2026-03-11 01:04:16 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:04:16.356897 | orchestrator | 2026-03-11 01:04:16 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:16.356905 | orchestrator | 2026-03-11 01:04:16 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:16.356911 | orchestrator | 2026-03-11 01:04:16 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:16.356916 | orchestrator | 2026-03-11 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:19.335483 | orchestrator | 2026-03-11 01:04:19 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state STARTED 2026-03-11 01:04:19.337427 | orchestrator | 2026-03-11 01:04:19 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:19.341265 | orchestrator | 2026-03-11 01:04:19 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:19.342560 | orchestrator | 2026-03-11 01:04:19 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:19.342856 | orchestrator | 2026-03-11 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:22.394948 | orchestrator | 2026-03-11 01:04:22.395008 | orchestrator | 2026-03-11 01:04:22.395017 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:04:22.395023 | 
orchestrator | 2026-03-11 01:04:22.395029 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:04:22.395035 | orchestrator | Wednesday 11 March 2026 01:03:39 +0000 (0:00:00.305) 0:00:00.305 ******* 2026-03-11 01:04:22.395040 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:04:22.395047 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:04:22.395052 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:04:22.395058 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:04:22.395063 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:04:22.395069 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:04:22.395074 | orchestrator | ok: [testbed-manager] 2026-03-11 01:04:22.395080 | orchestrator | 2026-03-11 01:04:22.395085 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:04:22.395091 | orchestrator | Wednesday 11 March 2026 01:03:40 +0000 (0:00:00.741) 0:00:01.046 ******* 2026-03-11 01:04:22.395134 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-11 01:04:22.395142 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-11 01:04:22.395148 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-11 01:04:22.395179 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-11 01:04:22.395186 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-11 01:04:22.395192 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-11 01:04:22.395199 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-11 01:04:22.395204 | orchestrator | 2026-03-11 01:04:22.395210 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-11 01:04:22.395215 | orchestrator | 2026-03-11 01:04:22.395221 | orchestrator | TASK [ceph-rgw : include_tasks] 
************************************************ 2026-03-11 01:04:22.395227 | orchestrator | Wednesday 11 March 2026 01:03:40 +0000 (0:00:00.621) 0:00:01.668 ******* 2026-03-11 01:04:22.395233 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-11 01:04:22.395239 | orchestrator | 2026-03-11 01:04:22.395245 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-11 01:04:22.395251 | orchestrator | Wednesday 11 March 2026 01:03:41 +0000 (0:00:01.259) 0:00:02.927 ******* 2026-03-11 01:04:22.395256 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2026-03-11 01:04:22.395262 | orchestrator | 2026-03-11 01:04:22.395268 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-11 01:04:22.395731 | orchestrator | Wednesday 11 March 2026 01:03:45 +0000 (0:00:03.504) 0:00:06.432 ******* 2026-03-11 01:04:22.395758 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-11 01:04:22.395772 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-11 01:04:22.395779 | orchestrator | 2026-03-11 01:04:22.395784 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-11 01:04:22.395790 | orchestrator | Wednesday 11 March 2026 01:03:51 +0000 (0:00:06.028) 0:00:12.461 ******* 2026-03-11 01:04:22.395796 | orchestrator | ok: [testbed-node-3] => (item=service) 2026-03-11 01:04:22.395802 | orchestrator | 2026-03-11 01:04:22.395808 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-11 01:04:22.395822 | orchestrator | Wednesday 11 March 2026 01:03:54 
+0000 (0:00:02.758) 0:00:15.219 ******* 2026-03-11 01:04:22.395828 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:04:22.395834 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service) 2026-03-11 01:04:22.395839 | orchestrator | 2026-03-11 01:04:22.395845 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-11 01:04:22.395850 | orchestrator | Wednesday 11 March 2026 01:03:57 +0000 (0:00:03.591) 0:00:18.811 ******* 2026-03-11 01:04:22.395855 | orchestrator | ok: [testbed-node-3] => (item=admin) 2026-03-11 01:04:22.395861 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2026-03-11 01:04:22.395866 | orchestrator | 2026-03-11 01:04:22.395872 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-11 01:04:22.395877 | orchestrator | Wednesday 11 March 2026 01:04:04 +0000 (0:00:06.213) 0:00:25.024 ******* 2026-03-11 01:04:22.395882 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2026-03-11 01:04:22.395887 | orchestrator | 2026-03-11 01:04:22.395893 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:04:22.395899 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:04:22.395905 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:04:22.395910 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:04:22.395925 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:04:22.395931 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:04:22.395948 | orchestrator | testbed-node-4 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:04:22.395954 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:04:22.395964 | orchestrator | 2026-03-11 01:04:22.395969 | orchestrator | 2026-03-11 01:04:22.395975 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:04:22.395980 | orchestrator | Wednesday 11 March 2026 01:04:10 +0000 (0:00:06.571) 0:00:31.595 ******* 2026-03-11 01:04:22.395985 | orchestrator | =============================================================================== 2026-03-11 01:04:22.395991 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.57s 2026-03-11 01:04:22.395996 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.21s 2026-03-11 01:04:22.396002 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.03s 2026-03-11 01:04:22.396007 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.59s 2026-03-11 01:04:22.396013 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.50s 2026-03-11 01:04:22.396018 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.76s 2026-03-11 01:04:22.396024 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.26s 2026-03-11 01:04:22.396029 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2026-03-11 01:04:22.396034 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-03-11 01:04:22.396039 | orchestrator | 2026-03-11 01:04:22.396045 | orchestrator | 2026-03-11 01:04:22 | INFO  | Task bd20a2c0-d2c5-4c27-b71b-fa922445be1c is in state SUCCESS 2026-03-11 01:04:22.396050 | orchestrator | 2026-03-11 
01:04:22.396056 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:04:22.396061 | orchestrator | 2026-03-11 01:04:22.396067 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:04:22.396072 | orchestrator | Wednesday 11 March 2026 01:02:35 +0000 (0:00:00.258) 0:00:00.258 ******* 2026-03-11 01:04:22.396077 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:04:22.396083 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:04:22.396088 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:04:22.396094 | orchestrator | 2026-03-11 01:04:22.396099 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:04:22.396105 | orchestrator | Wednesday 11 March 2026 01:02:35 +0000 (0:00:00.341) 0:00:00.600 ******* 2026-03-11 01:04:22.396110 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-11 01:04:22.396304 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-11 01:04:22.396310 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-11 01:04:22.396315 | orchestrator | 2026-03-11 01:04:22.396320 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-11 01:04:22.396326 | orchestrator | 2026-03-11 01:04:22.396332 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-11 01:04:22.396337 | orchestrator | Wednesday 11 March 2026 01:02:36 +0000 (0:00:00.809) 0:00:01.410 ******* 2026-03-11 01:04:22.396343 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:04:22.396348 | orchestrator | 2026-03-11 01:04:22.396354 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-11 01:04:22.396359 | orchestrator | Wednesday 11 March 2026 
01:02:37 +0000 (0:00:01.121) 0:00:02.531 ******* 2026-03-11 01:04:22.396375 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-11 01:04:22.396381 | orchestrator | 2026-03-11 01:04:22.396386 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-11 01:04:22.396392 | orchestrator | Wednesday 11 March 2026 01:02:41 +0000 (0:00:03.380) 0:00:05.911 ******* 2026-03-11 01:04:22.396397 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-11 01:04:22.396403 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-11 01:04:22.396408 | orchestrator | 2026-03-11 01:04:22.396413 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-11 01:04:22.396435 | orchestrator | Wednesday 11 March 2026 01:02:47 +0000 (0:00:06.350) 0:00:12.262 ******* 2026-03-11 01:04:22.396440 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:04:22.396446 | orchestrator | 2026-03-11 01:04:22.396451 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-11 01:04:22.396457 | orchestrator | Wednesday 11 March 2026 01:02:50 +0000 (0:00:02.696) 0:00:14.958 ******* 2026-03-11 01:04:22.396463 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:04:22.396469 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-11 01:04:22.396475 | orchestrator | 2026-03-11 01:04:22.396480 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-11 01:04:22.396486 | orchestrator | Wednesday 11 March 2026 01:02:53 +0000 (0:00:03.422) 0:00:18.381 ******* 2026-03-11 01:04:22.396492 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:04:22.396497 | orchestrator | 2026-03-11 
01:04:22.396503 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-11 01:04:22.396508 | orchestrator | Wednesday 11 March 2026 01:02:56 +0000 (0:00:03.076) 0:00:21.457 ******* 2026-03-11 01:04:22.396514 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-11 01:04:22.396519 | orchestrator | 2026-03-11 01:04:22.396525 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-11 01:04:22.396530 | orchestrator | Wednesday 11 March 2026 01:02:59 +0000 (0:00:03.174) 0:00:24.631 ******* 2026-03-11 01:04:22.396536 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.396541 | orchestrator | 2026-03-11 01:04:22.396553 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-11 01:04:22.396558 | orchestrator | Wednesday 11 March 2026 01:03:03 +0000 (0:00:03.259) 0:00:27.891 ******* 2026-03-11 01:04:22.396564 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.396569 | orchestrator | 2026-03-11 01:04:22.396575 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-11 01:04:22.396580 | orchestrator | Wednesday 11 March 2026 01:03:07 +0000 (0:00:04.115) 0:00:32.006 ******* 2026-03-11 01:04:22.396586 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.396591 | orchestrator | 2026-03-11 01:04:22.396597 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-11 01:04:22.396602 | orchestrator | Wednesday 11 March 2026 01:03:10 +0000 (0:00:03.357) 0:00:35.364 ******* 2026-03-11 01:04:22.396610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396672 | orchestrator | 2026-03-11 01:04:22.396678 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-11 01:04:22.396684 | orchestrator | Wednesday 11 March 2026 01:03:11 +0000 (0:00:01.210) 0:00:36.574 ******* 2026-03-11 01:04:22.396734 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:22.396741 | orchestrator | 2026-03-11 01:04:22.396746 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-11 01:04:22.396752 | orchestrator | Wednesday 11 March 2026 01:03:11 +0000 (0:00:00.099) 0:00:36.674 ******* 2026-03-11 01:04:22.396757 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:22.396763 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:22.396768 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:22.396774 | orchestrator | 2026-03-11 01:04:22.396779 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-11 01:04:22.396784 | orchestrator | 
Wednesday 11 March 2026 01:03:12 +0000 (0:00:00.362) 0:00:37.037 ******* 2026-03-11 01:04:22.396790 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:04:22.396795 | orchestrator | 2026-03-11 01:04:22.396800 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-11 01:04:22.396805 | orchestrator | Wednesday 11 March 2026 01:03:13 +0000 (0:00:00.791) 0:00:37.828 ******* 2026-03-11 01:04:22.396815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396875 | orchestrator | 2026-03-11 01:04:22.396880 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-11 01:04:22.396885 | orchestrator | Wednesday 11 March 2026 01:03:15 +0000 (0:00:02.088) 0:00:39.917 ******* 2026-03-11 01:04:22.396891 | orchestrator | ok: [testbed-node-0] 
2026-03-11 01:04:22.396896 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:04:22.396902 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:04:22.396907 | orchestrator | 2026-03-11 01:04:22.396912 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-11 01:04:22.396918 | orchestrator | Wednesday 11 March 2026 01:03:15 +0000 (0:00:00.309) 0:00:40.226 ******* 2026-03-11 01:04:22.396923 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:04:22.396929 | orchestrator | 2026-03-11 01:04:22.396934 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-11 01:04:22.396939 | orchestrator | Wednesday 11 March 2026 01:03:16 +0000 (0:00:00.684) 0:00:40.910 ******* 2026-03-11 01:04:22.396949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.396973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.396993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397001 | orchestrator | 2026-03-11 01:04:22.397007 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-11 01:04:22.397012 | orchestrator | Wednesday 11 March 2026 01:03:18 +0000 (0:00:02.511) 0:00:43.422 ******* 2026-03-11 01:04:22.397018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397031 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:22.397036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397054 | orchestrator | skipping: [testbed-node-1] 2026-03-11 
01:04:22.397060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397069 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:22.397074 | orchestrator | 2026-03-11 01:04:22.397080 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-11 01:04:22.397086 | 
orchestrator | Wednesday 11 March 2026 01:03:19 +0000 (0:00:00.606) 0:00:44.028 ******* 2026-03-11 01:04:22.397094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397109 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:22.397119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397130 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:22.397139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397154 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:22.397191 | orchestrator | 2026-03-11 01:04:22.397197 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-11 01:04:22.397202 | orchestrator | Wednesday 11 March 2026 01:03:20 +0000 (0:00:01.212) 0:00:45.241 ******* 2026-03-11 01:04:22.397217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397255 | orchestrator | 2026-03-11 01:04:22.397260 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-11 01:04:22.397265 | orchestrator | Wednesday 11 March 2026 01:03:22 +0000 (0:00:02.199) 0:00:47.440 ******* 2026-03-11 01:04:22.397270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397316 | orchestrator | 2026-03-11 01:04:22.397321 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-11 01:04:22.397327 | orchestrator | Wednesday 11 March 2026 01:03:27 +0000 (0:00:04.509) 0:00:51.950 ******* 2026-03-11 01:04:22.397332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397349 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:22.397354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397370 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:22.397375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-11 01:04:22.397380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:04:22.397388 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:22.397394 | orchestrator | 2026-03-11 01:04:22.397399 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-11 01:04:22.397409 | orchestrator | Wednesday 11 March 2026 01:03:27 +0000 (0:00:00.467) 0:00:52.417 ******* 2026-03-11 01:04:22.397419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:04:22.397447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:04:22.397475 | orchestrator | 2026-03-11 01:04:22.397480 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2026-03-11 01:04:22.397486 | orchestrator | Wednesday 11 March 2026 01:03:29 +0000 (0:00:01.849) 0:00:54.267 ******* 2026-03-11 01:04:22.397491 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:22.397495 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:22.397500 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:22.397505 | orchestrator | 2026-03-11 01:04:22.397509 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-11 01:04:22.397514 | orchestrator | Wednesday 11 March 2026 01:03:29 +0000 (0:00:00.269) 0:00:54.536 ******* 2026-03-11 01:04:22.397519 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.397524 | orchestrator | 2026-03-11 01:04:22.397529 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-11 01:04:22.397534 | orchestrator | Wednesday 11 March 2026 01:03:31 +0000 (0:00:01.985) 0:00:56.522 ******* 2026-03-11 01:04:22.397539 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.397544 | orchestrator | 2026-03-11 01:04:22.397553 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-11 01:04:22.397559 | orchestrator | Wednesday 11 March 2026 01:03:33 +0000 (0:00:02.191) 0:00:58.713 ******* 2026-03-11 01:04:22.397566 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.397573 | orchestrator | 2026-03-11 01:04:22.397578 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-11 01:04:22.397583 | orchestrator | Wednesday 11 March 2026 01:03:50 +0000 (0:00:16.323) 0:01:15.036 ******* 2026-03-11 01:04:22.397588 | orchestrator | 2026-03-11 01:04:22.397593 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-11 01:04:22.397598 | orchestrator | Wednesday 11 March 2026 01:03:50 +0000 
(0:00:00.076) 0:01:15.113 ******* 2026-03-11 01:04:22.397602 | orchestrator | 2026-03-11 01:04:22.397607 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-11 01:04:22.397611 | orchestrator | Wednesday 11 March 2026 01:03:50 +0000 (0:00:00.067) 0:01:15.181 ******* 2026-03-11 01:04:22.397616 | orchestrator | 2026-03-11 01:04:22.397621 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-11 01:04:22.397626 | orchestrator | Wednesday 11 March 2026 01:03:50 +0000 (0:00:00.075) 0:01:15.256 ******* 2026-03-11 01:04:22.397631 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.397636 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:04:22.397641 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:04:22.397645 | orchestrator | 2026-03-11 01:04:22.397650 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-11 01:04:22.397654 | orchestrator | Wednesday 11 March 2026 01:04:05 +0000 (0:00:15.170) 0:01:30.427 ******* 2026-03-11 01:04:22.397659 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:22.397668 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:04:22.397673 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:04:22.397678 | orchestrator | 2026-03-11 01:04:22.397683 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:04:22.397688 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-11 01:04:22.397695 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:04:22.397700 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:04:22.397704 | orchestrator | 2026-03-11 01:04:22.397709 | orchestrator | 2026-03-11 01:04:22.397714 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:04:22.397719 | orchestrator | Wednesday 11 March 2026 01:04:19 +0000 (0:00:13.876) 0:01:44.304 ******* 2026-03-11 01:04:22.397724 | orchestrator | =============================================================================== 2026-03-11 01:04:22.397731 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.32s 2026-03-11 01:04:22.397738 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.17s 2026-03-11 01:04:22.397743 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.88s 2026-03-11 01:04:22.397748 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.35s 2026-03-11 01:04:22.397752 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.51s 2026-03-11 01:04:22.397757 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.12s 2026-03-11 01:04:22.397766 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.42s 2026-03-11 01:04:22.397771 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.38s 2026-03-11 01:04:22.397776 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.36s 2026-03-11 01:04:22.397783 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.26s 2026-03-11 01:04:22.397789 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.17s 2026-03-11 01:04:22.397794 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.08s 2026-03-11 01:04:22.397799 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.70s 2026-03-11 01:04:22.397803 | orchestrator | 
service-cert-copy : magnum | Copying over extra CA certificates --------- 2.51s 2026-03-11 01:04:22.397808 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.20s 2026-03-11 01:04:22.397813 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.19s 2026-03-11 01:04:22.397820 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.09s 2026-03-11 01:04:22.397827 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.99s 2026-03-11 01:04:22.397831 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.85s 2026-03-11 01:04:22.397836 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.21s 2026-03-11 01:04:22.397842 | orchestrator | 2026-03-11 01:04:22 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:22.398422 | orchestrator | 2026-03-11 01:04:22 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:22.400044 | orchestrator | 2026-03-11 01:04:22 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:22.401090 | orchestrator | 2026-03-11 01:04:22 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:22.401215 | orchestrator | 2026-03-11 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:25.450916 | orchestrator | 2026-03-11 01:04:25 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:25.452851 | orchestrator | 2026-03-11 01:04:25 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:25.453907 | orchestrator | 2026-03-11 01:04:25 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:25.455872 | orchestrator | 2026-03-11 01:04:25 | INFO  | Task 
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:25.456907 | orchestrator | 2026-03-11 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:28.504651 | orchestrator | 2026-03-11 01:04:28 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:28.506425 | orchestrator | 2026-03-11 01:04:28 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:28.508514 | orchestrator | 2026-03-11 01:04:28 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:28.510247 | orchestrator | 2026-03-11 01:04:28 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:28.510401 | orchestrator | 2026-03-11 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:31.571213 | orchestrator | 2026-03-11 01:04:31 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:31.571257 | orchestrator | 2026-03-11 01:04:31 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:31.571262 | orchestrator | 2026-03-11 01:04:31 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:31.571266 | orchestrator | 2026-03-11 01:04:31 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:31.571270 | orchestrator | 2026-03-11 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:34.592771 | orchestrator | 2026-03-11 01:04:34 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:34.595046 | orchestrator | 2026-03-11 01:04:34 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:34.597519 | orchestrator | 2026-03-11 01:04:34 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:34.599802 | orchestrator | 2026-03-11 01:04:34 | INFO  | Task 
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:34.599853 | orchestrator | 2026-03-11 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:37.645115 | orchestrator | 2026-03-11 01:04:37 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:37.648063 | orchestrator | 2026-03-11 01:04:37 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:37.649567 | orchestrator | 2026-03-11 01:04:37 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:37.651215 | orchestrator | 2026-03-11 01:04:37 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:37.651260 | orchestrator | 2026-03-11 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:40.698444 | orchestrator | 2026-03-11 01:04:40 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:40.699884 | orchestrator | 2026-03-11 01:04:40 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:40.701582 | orchestrator | 2026-03-11 01:04:40 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:40.702931 | orchestrator | 2026-03-11 01:04:40 | INFO  | Task 0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state STARTED 2026-03-11 01:04:40.702998 | orchestrator | 2026-03-11 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:43.733421 | orchestrator | 2026-03-11 01:04:43 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:43.733477 | orchestrator | 2026-03-11 01:04:43 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:43.733983 | orchestrator | 2026-03-11 01:04:43 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:43.735914 | orchestrator | 2026-03-11 01:04:43 | INFO  | Task 
0ad2843d-d0f2-434e-ad92-2d28dc6b2581 is in state SUCCESS 2026-03-11 01:04:43.735946 | orchestrator | 2026-03-11 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:43.736941 | orchestrator | 2026-03-11 01:04:43.736974 | orchestrator | 2026-03-11 01:04:43.736979 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:04:43.736984 | orchestrator | 2026-03-11 01:04:43.736988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:04:43.736992 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:00.293) 0:00:00.293 ******* 2026-03-11 01:04:43.736996 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:04:43.737000 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:04:43.737004 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:04:43.737008 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:04:43.737012 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:04:43.737016 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:04:43.737020 | orchestrator | 2026-03-11 01:04:43.737023 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:04:43.737027 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:00.639) 0:00:00.932 ******* 2026-03-11 01:04:43.737031 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-11 01:04:43.737036 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-11 01:04:43.737039 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-11 01:04:43.737043 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-11 01:04:43.737047 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-11 01:04:43.737051 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-11 01:04:43.737055 | orchestrator | 2026-03-11 01:04:43.737059 | orchestrator 
PLAY [Apply role neutron] ******************************************************

TASK [neutron : include_tasks] *************************************************
Wednesday 11 March 2026 01:00:38 +0000 (0:00:00.637) 0:00:01.570 *******
included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [neutron : Get container facts] *******************************************
Wednesday 11 March 2026 01:00:39 +0000 (0:00:00.969) 0:00:02.539 *******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [neutron : Get container volume facts] ************************************
Wednesday 11 March 2026 01:00:40 +0000 (0:00:01.192) 0:00:03.732 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [neutron : Check for ML2/OVN presence] ************************************
Wednesday 11 March 2026 01:00:41 +0000 (0:00:01.072) 0:00:04.804 *******
ok: [testbed-node-0] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [testbed-node-1] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [testbed-node-2] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [testbed-node-3] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [testbed-node-4] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [testbed-node-5] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [neutron : Check for ML2/OVS presence] ************************************
Wednesday 11 March 2026 01:00:42 +0000 (0:00:00.670) 0:00:05.475 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [service-ks-register : neutron | Creating services] ***********************
Wednesday 11 March 2026 01:00:43 +0000 (0:00:00.499) 0:00:05.975 *******
changed: [testbed-node-0] => (item=neutron (network))

TASK [service-ks-register : neutron | Creating endpoints] **********************
Wednesday 11 March 2026 01:00:46 +0000 (0:00:03.463) 0:00:09.438 *******
changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)

TASK [service-ks-register : neutron | Creating projects] ***********************
Wednesday 11 March 2026 01:00:52 +0000 (0:00:06.107) 0:00:15.546 *******
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : neutron | Creating users] **************************
Wednesday 11 March 2026 01:00:55 +0000 (0:00:03.107) 0:00:18.654 *******
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=neutron -> service)

TASK [service-ks-register : neutron | Creating roles] **************************
Wednesday 11 March 2026 01:00:59 +0000 (0:00:03.912) 0:00:22.566 *******
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : neutron | Granting user roles] *********************
Wednesday 11 March 2026 01:01:02 +0000 (0:00:03.267) 0:00:25.833 *******
changed: [testbed-node-0] => (item=neutron -> service -> admin)
changed: [testbed-node-0] => (item=neutron -> service -> service)

TASK [neutron : include_tasks] *************************************************
Wednesday 11 March 2026 01:01:10 +0000 (0:00:07.392) 0:00:33.226 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Load and persist kernel modules] *****************************************
Wednesday 11 March 2026 01:01:10 +0000 (0:00:00.650) 0:00:33.877 *******
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [neutron : Check IPv6 support] ********************************************
Wednesday 11 March 2026 01:01:13 +0000 (0:00:02.148) 0:00:36.025 *******
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Setting sysctl values] ***************************************************
Wednesday 11 March 2026 01:01:14 +0000 (0:00:00.961) 0:00:36.987 *******
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [neutron : Ensuring config directories exist] *****************************
Wednesday 11 March 2026 01:01:16 +0000 (0:00:02.480) 0:00:39.468 *******
changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})

TASK [neutron : Check if extra ml2 plugins exists] *****************************
Wednesday 11 March 2026 01:01:19 +0000 (0:00:02.914) 0:00:42.382 *******
[WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not a directory
ok: [testbed-node-0 -> localhost]

TASK [neutron : include_tasks] *************************************************
Wednesday 11 March 2026 01:01:20 +0000 (0:00:00.674) 0:00:43.057 *******
included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
Wednesday 11 March 2026 01:01:21 +0000 (0:00:01.185) 0:00:44.243 *******
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})

TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
Wednesday 11 March 2026 01:01:25 +0000 (0:00:04.341) 0:00:48.584 *******
skipping: [testbed-node-0] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-5]

TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
Wednesday 11 March 2026 01:01:28 +0000 (0:00:02.870) 0:00:51.455 *******
skipping: [testbed-node-2] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-1]
skipping: [testbed-node-0] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-0]
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-5]

TASK [neutron : Creating TLS backend PEM File] *********************************
Wednesday 11 March 2026 01:01:31 +0000 (0:00:03.182) 0:00:54.638 *******
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [neutron : Check if policies shall be overwritten] ************************
Wednesday 11 March 2026 01:01:34 +0000 (0:00:02.796) 0:00:57.434 *******
skipping: [testbed-node-0]

TASK [neutron : Set neutron policy file] ***************************************
Wednesday 11 March 2026 01:01:34 +0000 (0:00:00.099) 0:00:57.534 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [neutron : Copying over existing policy file] *****************************
Wednesday 11 March 2026 01:01:35 +0000 (0:00:00.689) 0:00:58.223 *******
skipping: [testbed-node-0] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-3]
skipping: [testbed-node-2] => (item={'key': 'neutron-server', ...})
skipping: [testbed-node-2]
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-5]
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', ...})
skipping: [testbed-node-4]
2026-03-11 01:04:43.738701 | orchestrator |
2026-03-11 01:04:43.738705 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2026-03-11 01:04:43.738709 | orchestrator | Wednesday 11 March 2026 01:01:37 +0000 (0:00:02.691) 0:01:00.915 *******
2026-03-11 01:04:43.738715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738762 | orchestrator |
2026-03-11 01:04:43.738768 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-03-11 01:04:43.738775 | orchestrator | Wednesday 11 March 2026 01:01:41 +0000 (0:00:03.654) 0:01:04.569 *******
2026-03-11 01:04:43.738784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738841 | orchestrator |
2026-03-11 01:04:43.738848 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-03-11 01:04:43.738855 | orchestrator | Wednesday 11 March 2026 01:01:48 +0000 (0:00:06.958) 0:01:11.528 *******
2026-03-11 01:04:43.738861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738878 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.738882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738886 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.738893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738897 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.738901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.738908 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.738915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738920 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.738924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.738928 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.738931 | orchestrator |
2026-03-11 01:04:43.738935 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-03-11 01:04:43.738940 | orchestrator | Wednesday 11 March 2026 01:01:51 +0000 (0:00:02.737) 0:01:14.265 *******
2026-03-11 01:04:43.738946 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.738953 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.738959 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.738966 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:43.738972 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:04:43.738979 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:04:43.738985 | orchestrator |
2026-03-11 01:04:43.738992 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-03-11 01:04:43.738999 | orchestrator | Wednesday 11 March 2026 01:01:54 +0000 (0:00:02.962) 0:01:17.228 *******
2026-03-11 01:04:43.739009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.739017 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.739034 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.739047 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.739055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.739061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.739070 | orchestrator |
2026-03-11 01:04:43.739074 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-03-11 01:04:43.739078 | orchestrator | Wednesday 11 March 2026 01:01:57 +0000 (0:00:03.694) 0:01:20.922 *******
2026-03-11 01:04:43.739082 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739085 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739089 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739093 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739097 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739101 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739104 | orchestrator |
2026-03-11 01:04:43.739108 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-03-11 01:04:43.739112 | orchestrator | Wednesday 11 March 2026 01:02:00 +0000 (0:00:02.509) 0:01:23.431 *******
2026-03-11 01:04:43.739116 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739120 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739124 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739139 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739144 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739148 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739152 | orchestrator |
2026-03-11 01:04:43.739156 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-11 01:04:43.739159 | orchestrator | Wednesday 11 March 2026 01:02:02 +0000 (0:00:02.252) 0:01:25.684 *******
2026-03-11 01:04:43.739167 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739171 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739175 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739178 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739182 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739186 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739190 | orchestrator |
2026-03-11 01:04:43.739194 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-11 01:04:43.739198 | orchestrator | Wednesday 11 March 2026 01:02:05 +0000 (0:00:02.736) 0:01:28.421 *******
2026-03-11 01:04:43.739202 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739206 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739210 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739213 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739217 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739221 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739225 | orchestrator |
2026-03-11 01:04:43.739229 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-11 01:04:43.739233 | orchestrator | Wednesday 11 March 2026 01:02:07 +0000 (0:00:02.036) 0:01:30.457 *******
2026-03-11 01:04:43.739236 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739240 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739244 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739248 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739252 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739256 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739263 | orchestrator |
2026-03-11 01:04:43.739271 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-11 01:04:43.739280 | orchestrator | Wednesday 11 March 2026 01:02:09 +0000 (0:00:02.088) 0:01:32.546 *******
2026-03-11 01:04:43.739286 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739292 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739298 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739309 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739315 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739322 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739328 | orchestrator |
2026-03-11 01:04:43.739334 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-11 01:04:43.739340 | orchestrator | Wednesday 11 March 2026 01:02:11 +0000 (0:00:02.021) 0:01:34.568 *******
2026-03-11 01:04:43.739347 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-11 01:04:43.739353 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739360 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-11 01:04:43.739366 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739372 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-11 01:04:43.739378 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739382 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-11 01:04:43.739386 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:43.739390 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-11 01:04:43.739394 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:43.739398 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-11 01:04:43.739402 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739406 | orchestrator |
2026-03-11 01:04:43.739410 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-03-11 01:04:43.739417 | orchestrator | Wednesday 11 March 2026 01:02:13 +0000 (0:00:01.980) 0:01:36.548 *******
2026-03-11 01:04:43.739421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.739426 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:43.739434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.739438 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:43.739442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:43.739450 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:43.739454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:43.739459 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:43.739465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared',
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.739469 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.739477 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739481 | orchestrator | 2026-03-11 01:04:43.739485 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-11 01:04:43.739489 | orchestrator | Wednesday 11 March 2026 01:02:16 +0000 (0:00:02.723) 0:01:39.272 ******* 2026-03-11 01:04:43.739496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:43.739503 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:43.739511 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.739522 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.739543 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:43.739563 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.739576 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739583 | orchestrator | 2026-03-11 01:04:43.739589 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-11 01:04:43.739596 | orchestrator | Wednesday 11 March 2026 01:02:18 +0000 (0:00:02.045) 0:01:41.317 ******* 2026-03-11 01:04:43.739603 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739610 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739616 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739622 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739626 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739630 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739633 | orchestrator | 2026-03-11 01:04:43.739637 | orchestrator | TASK [neutron : Copying over 
neutron_ovn_metadata_agent.ini] ******************* 2026-03-11 01:04:43.739641 | orchestrator | Wednesday 11 March 2026 01:02:20 +0000 (0:00:01.759) 0:01:43.077 ******* 2026-03-11 01:04:43.739645 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739649 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739653 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739657 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:04:43.739661 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:04:43.739664 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:04:43.739668 | orchestrator | 2026-03-11 01:04:43.739672 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-11 01:04:43.739676 | orchestrator | Wednesday 11 March 2026 01:02:23 +0000 (0:00:02.999) 0:01:46.076 ******* 2026-03-11 01:04:43.739680 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739684 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739688 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739692 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739696 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739700 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739704 | orchestrator | 2026-03-11 01:04:43.739708 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-11 01:04:43.739712 | orchestrator | Wednesday 11 March 2026 01:02:26 +0000 (0:00:03.171) 0:01:49.247 ******* 2026-03-11 01:04:43.739716 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739720 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739724 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739728 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739732 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739735 | orchestrator | skipping: 
[testbed-node-5] 2026-03-11 01:04:43.739739 | orchestrator | 2026-03-11 01:04:43.739746 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-11 01:04:43.739750 | orchestrator | Wednesday 11 March 2026 01:02:29 +0000 (0:00:03.085) 0:01:52.333 ******* 2026-03-11 01:04:43.739754 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739758 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739762 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739769 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739773 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739777 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739781 | orchestrator | 2026-03-11 01:04:43.739785 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-11 01:04:43.739789 | orchestrator | Wednesday 11 March 2026 01:02:31 +0000 (0:00:02.031) 0:01:54.364 ******* 2026-03-11 01:04:43.739793 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739797 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739801 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739805 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739809 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739813 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739817 | orchestrator | 2026-03-11 01:04:43.739821 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-11 01:04:43.739824 | orchestrator | Wednesday 11 March 2026 01:02:33 +0000 (0:00:02.398) 0:01:56.763 ******* 2026-03-11 01:04:43.739828 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739832 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739836 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739840 | orchestrator | skipping: 
[testbed-node-3] 2026-03-11 01:04:43.739844 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739848 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739852 | orchestrator | 2026-03-11 01:04:43.739855 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-11 01:04:43.739859 | orchestrator | Wednesday 11 March 2026 01:02:35 +0000 (0:00:01.855) 0:01:58.618 ******* 2026-03-11 01:04:43.739863 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739867 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739871 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739875 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739879 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739883 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739887 | orchestrator | 2026-03-11 01:04:43.739890 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-11 01:04:43.739897 | orchestrator | Wednesday 11 March 2026 01:02:38 +0000 (0:00:03.164) 0:02:01.783 ******* 2026-03-11 01:04:43.739901 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739905 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739909 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739913 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739917 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739921 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739925 | orchestrator | 2026-03-11 01:04:43.739928 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-11 01:04:43.739932 | orchestrator | Wednesday 11 March 2026 01:02:40 +0000 (0:00:01.557) 0:02:03.340 ******* 2026-03-11 01:04:43.739936 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-11 01:04:43.739941 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.739945 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-11 01:04:43.739949 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.739953 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-11 01:04:43.739957 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.739961 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-11 01:04:43.739965 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.739969 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-11 01:04:43.739973 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.739982 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-11 01:04:43.739985 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.739989 | orchestrator | 2026-03-11 01:04:43.739993 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-11 01:04:43.739997 | orchestrator | Wednesday 11 March 2026 01:02:42 +0000 (0:00:01.858) 0:02:05.199 ******* 2026-03-11 01:04:43.740001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:43.740006 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.740013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:43.740018 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.740029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:43.740037 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.740044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.740055 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.740062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.740069 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.740079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:43.740087 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.740093 | orchestrator | 2026-03-11 01:04:43.740097 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-11 01:04:43.740101 | orchestrator | Wednesday 11 March 2026 01:02:43 +0000 (0:00:01.656) 0:02:06.855 ******* 2026-03-11 01:04:43.740106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:43.740114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:43.740121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:43.740126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:43.740148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:43.740153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:43.740159 | orchestrator | 2026-03-11 01:04:43.740167 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-11 01:04:43.740176 | orchestrator | Wednesday 11 March 2026 01:02:46 +0000 (0:00:02.470) 0:02:09.326 ******* 2026-03-11 01:04:43.740183 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:43.740190 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:43.740196 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:43.740201 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:43.740207 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:43.740218 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:43.740226 | orchestrator | 2026-03-11 01:04:43.740232 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-11 01:04:43.740244 | orchestrator | Wednesday 11 March 2026 01:02:46 +0000 (0:00:00.517) 0:02:09.843 ******* 2026-03-11 01:04:43.740248 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:43.740252 | orchestrator | 2026-03-11 01:04:43.740256 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-11 01:04:43.740260 | orchestrator | Wednesday 11 March 2026 01:02:48 +0000 
(0:00:01.867) 0:02:11.710 *******
2026-03-11 01:04:43.740263 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:43.740268 | orchestrator |
2026-03-11 01:04:43.740272 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-11 01:04:43.740276 | orchestrator | Wednesday 11 March 2026 01:02:50 +0000 (0:00:01.949) 0:02:13.660 *******
2026-03-11 01:04:43.740280 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:43.740284 | orchestrator |
2026-03-11 01:04:43.740288 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:43.740291 | orchestrator | Wednesday 11 March 2026 01:03:31 +0000 (0:00:41.291) 0:02:54.952 *******
2026-03-11 01:04:43.740295 | orchestrator |
2026-03-11 01:04:43.740299 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:43.740303 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.178) 0:02:55.190 *******
2026-03-11 01:04:43.740307 | orchestrator |
2026-03-11 01:04:43.740311 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:43.740315 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.059) 0:02:55.249 *******
2026-03-11 01:04:43.740319 | orchestrator |
2026-03-11 01:04:43.740322 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:43.740326 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.059) 0:02:55.308 *******
2026-03-11 01:04:43.740330 | orchestrator |
2026-03-11 01:04:43.740334 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:43.740338 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.059) 0:02:55.368 *******
2026-03-11 01:04:43.740342 | orchestrator |
2026-03-11 01:04:43.740346 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:43.740350 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.062) 0:02:55.430 *******
2026-03-11 01:04:43.740353 | orchestrator |
2026-03-11 01:04:43.740357 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-11 01:04:43.740361 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:20.784) 0:03:16.215 *******
2026-03-11 01:04:43.740365 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:43.740371 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:04:43.740378 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:04:43.740384 | orchestrator |
2026-03-11 01:04:43.740390 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-11 01:04:43.740396 | orchestrator | Wednesday 11 March 2026 01:03:53 +0000 (0:00:49.417) 0:04:05.633 *******
2026-03-11 01:04:43.740402 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:04:43.740408 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:04:43.740414 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:04:43.740420 | orchestrator |
2026-03-11 01:04:43.740425 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:04:43.740432 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:43.740442 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-11 01:04:43.740449 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-11 01:04:43.740455 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:43.740465 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:43.740472 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:43.740479 | orchestrator |
2026-03-11 01:04:43.740486 | orchestrator |
2026-03-11 01:04:43.740492 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:04:43.740499 | orchestrator | Wednesday 11 March 2026 01:04:42 +0000 (0:00:49.417) 0:04:05.633 *******
2026-03-11 01:04:43.740505 | orchestrator | ===============================================================================
2026-03-11 01:04:43.740512 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 49.42s
2026-03-11 01:04:43.740519 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.29s
2026-03-11 01:04:43.740525 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.78s
2026-03-11 01:04:43.740532 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.39s
2026-03-11 01:04:43.740539 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.96s
2026-03-11 01:04:43.740545 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.11s
2026-03-11 01:04:43.740552 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.34s
2026-03-11 01:04:43.740559 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.91s
2026-03-11 01:04:43.740571 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.69s
2026-03-11 01:04:43.740576 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.65s
2026-03-11 01:04:43.740580 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.46s
2026-03-11 01:04:43.740584 |
orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.27s 2026-03-11 01:04:43.740587 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.18s 2026-03-11 01:04:43.740591 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.17s 2026-03-11 01:04:43.740595 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.16s 2026-03-11 01:04:43.740599 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.11s 2026-03-11 01:04:43.740602 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.09s 2026-03-11 01:04:43.740606 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.00s 2026-03-11 01:04:43.740610 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.96s 2026-03-11 01:04:43.740614 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.91s 2026-03-11 01:04:46.765194 | orchestrator | 2026-03-11 01:04:46 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:46.765258 | orchestrator | 2026-03-11 01:04:46 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:04:46.767292 | orchestrator | 2026-03-11 01:04:46 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:46.767423 | orchestrator | 2026-03-11 01:04:46 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:46.767434 | orchestrator | 2026-03-11 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:49.790439 | orchestrator | 2026-03-11 01:04:49 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:49.790650 | orchestrator | 2026-03-11 01:04:49 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is 
in state STARTED 2026-03-11 01:04:49.791342 | orchestrator | 2026-03-11 01:04:49 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:49.791836 | orchestrator | 2026-03-11 01:04:49 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:49.791864 | orchestrator | 2026-03-11 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:52.829673 | orchestrator | 2026-03-11 01:04:52 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:52.830087 | orchestrator | 2026-03-11 01:04:52 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:04:52.830580 | orchestrator | 2026-03-11 01:04:52 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:52.831417 | orchestrator | 2026-03-11 01:04:52 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:52.831436 | orchestrator | 2026-03-11 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:55.868356 | orchestrator | 2026-03-11 01:04:55 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:55.868679 | orchestrator | 2026-03-11 01:04:55 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:04:55.869188 | orchestrator | 2026-03-11 01:04:55 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:55.869979 | orchestrator | 2026-03-11 01:04:55 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:55.870002 | orchestrator | 2026-03-11 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:58.899450 | orchestrator | 2026-03-11 01:04:58 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:04:58.899509 | orchestrator | 2026-03-11 01:04:58 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 
01:04:58.899518 | orchestrator | 2026-03-11 01:04:58 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:04:58.900016 | orchestrator | 2026-03-11 01:04:58 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:04:58.900327 | orchestrator | 2026-03-11 01:04:58 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:01.934674 | orchestrator | 2026-03-11 01:05:01 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:01.934944 | orchestrator | 2026-03-11 01:05:01 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:01.937748 | orchestrator | 2026-03-11 01:05:01 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:01.938466 | orchestrator | 2026-03-11 01:05:01 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:01.938485 | orchestrator | 2026-03-11 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:04.980392 | orchestrator | 2026-03-11 01:05:04 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:04.980450 | orchestrator | 2026-03-11 01:05:04 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:04.980458 | orchestrator | 2026-03-11 01:05:04 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:04.980463 | orchestrator | 2026-03-11 01:05:04 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:04.980469 | orchestrator | 2026-03-11 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:08.009759 | orchestrator | 2026-03-11 01:05:08 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:08.010404 | orchestrator | 2026-03-11 01:05:08 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:08.011740 | orchestrator 
| 2026-03-11 01:05:08 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:08.018764 | orchestrator | 2026-03-11 01:05:08 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:08.018815 | orchestrator | 2026-03-11 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:11.057585 | orchestrator | 2026-03-11 01:05:11 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:11.057680 | orchestrator | 2026-03-11 01:05:11 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:11.058402 | orchestrator | 2026-03-11 01:05:11 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:11.059911 | orchestrator | 2026-03-11 01:05:11 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:11.059967 | orchestrator | 2026-03-11 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:14.130911 | orchestrator | 2026-03-11 01:05:14 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:14.130990 | orchestrator | 2026-03-11 01:05:14 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:14.131559 | orchestrator | 2026-03-11 01:05:14 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:14.132196 | orchestrator | 2026-03-11 01:05:14 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:14.132368 | orchestrator | 2026-03-11 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:17.167498 | orchestrator | 2026-03-11 01:05:17 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:17.168690 | orchestrator | 2026-03-11 01:05:17 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:17.170537 | orchestrator | 2026-03-11 01:05:17 | INFO  | 
Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:17.171739 | orchestrator | 2026-03-11 01:05:17 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:17.172715 | orchestrator | 2026-03-11 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:20.233425 | orchestrator | 2026-03-11 01:05:20 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:20.234288 | orchestrator | 2026-03-11 01:05:20 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:20.235149 | orchestrator | 2026-03-11 01:05:20 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:20.235870 | orchestrator | 2026-03-11 01:05:20 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:20.235888 | orchestrator | 2026-03-11 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:23.298924 | orchestrator | 2026-03-11 01:05:23 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:23.299054 | orchestrator | 2026-03-11 01:05:23 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:23.299927 | orchestrator | 2026-03-11 01:05:23 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:23.300450 | orchestrator | 2026-03-11 01:05:23 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:23.300481 | orchestrator | 2026-03-11 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:26.331485 | orchestrator | 2026-03-11 01:05:26 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:26.331992 | orchestrator | 2026-03-11 01:05:26 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:26.332029 | orchestrator | 2026-03-11 01:05:26 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:26.332424 | orchestrator | 2026-03-11 01:05:26 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:26.332438 | orchestrator | 2026-03-11 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:29.357955 | orchestrator | 2026-03-11 01:05:29 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:29.358395 | orchestrator | 2026-03-11 01:05:29 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:29.358795 | orchestrator | 2026-03-11 01:05:29 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:29.359444 | orchestrator | 2026-03-11 01:05:29 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:29.359464 | orchestrator | 2026-03-11 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:32.393519 | orchestrator | 2026-03-11 01:05:32 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:32.393721 | orchestrator | 2026-03-11 01:05:32 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:32.394348 | orchestrator | 2026-03-11 01:05:32 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:32.394901 | orchestrator | 2026-03-11 01:05:32 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:32.394973 | orchestrator | 2026-03-11 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:35.428117 | orchestrator | 2026-03-11 01:05:35 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:35.428236 | orchestrator | 2026-03-11 01:05:35 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:35.429991 | orchestrator | 2026-03-11 01:05:35 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:35.432616 | orchestrator | 2026-03-11 01:05:35 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:35.432690 | orchestrator | 2026-03-11 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:38.456620 | orchestrator | 2026-03-11 01:05:38 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:38.458553 | orchestrator | 2026-03-11 01:05:38 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:38.459759 | orchestrator | 2026-03-11 01:05:38 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:38.460674 | orchestrator | 2026-03-11 01:05:38 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:38.460701 | orchestrator | 2026-03-11 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:41.486344 | orchestrator | 2026-03-11 01:05:41 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:41.486400 | orchestrator | 2026-03-11 01:05:41 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:41.486894 | orchestrator | 2026-03-11 01:05:41 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:41.487531 | orchestrator | 2026-03-11 01:05:41 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:41.487563 | orchestrator | 2026-03-11 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:44.512727 | orchestrator | 2026-03-11 01:05:44 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:44.512787 | orchestrator | 2026-03-11 01:05:44 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:44.513121 | orchestrator | 2026-03-11 01:05:44 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:44.513732 | orchestrator | 2026-03-11 01:05:44 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:44.513764 | orchestrator | 2026-03-11 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:47.544549 | orchestrator | 2026-03-11 01:05:47 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:47.544593 | orchestrator | 2026-03-11 01:05:47 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:47.544598 | orchestrator | 2026-03-11 01:05:47 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:47.544601 | orchestrator | 2026-03-11 01:05:47 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:47.544605 | orchestrator | 2026-03-11 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:50.582440 | orchestrator | 2026-03-11 01:05:50 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:50.582982 | orchestrator | 2026-03-11 01:05:50 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:50.584168 | orchestrator | 2026-03-11 01:05:50 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:50.586227 | orchestrator | 2026-03-11 01:05:50 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:50.586264 | orchestrator | 2026-03-11 01:05:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:53.622135 | orchestrator | 2026-03-11 01:05:53 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:53.623165 | orchestrator | 2026-03-11 01:05:53 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:53.624709 | orchestrator | 2026-03-11 01:05:53 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:53.625976 | orchestrator | 2026-03-11 01:05:53 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:53.626042 | orchestrator | 2026-03-11 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:56.675083 | orchestrator | 2026-03-11 01:05:56 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:56.676656 | orchestrator | 2026-03-11 01:05:56 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:56.679279 | orchestrator | 2026-03-11 01:05:56 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:56.679724 | orchestrator | 2026-03-11 01:05:56 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:56.680260 | orchestrator | 2026-03-11 01:05:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:59.722174 | orchestrator | 2026-03-11 01:05:59 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:05:59.723692 | orchestrator | 2026-03-11 01:05:59 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:05:59.724739 | orchestrator | 2026-03-11 01:05:59 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:05:59.725962 | orchestrator | 2026-03-11 01:05:59 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:05:59.726217 | orchestrator | 2026-03-11 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:02.766264 | orchestrator | 2026-03-11 01:06:02 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:02.766862 | orchestrator | 2026-03-11 01:06:02 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:02.767489 | orchestrator | 2026-03-11 01:06:02 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:02.768388 | orchestrator | 2026-03-11 01:06:02 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:02.768714 | orchestrator | 2026-03-11 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:05.835701 | orchestrator | 2026-03-11 01:06:05 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:05.836450 | orchestrator | 2026-03-11 01:06:05 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:05.837048 | orchestrator | 2026-03-11 01:06:05 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:05.837926 | orchestrator | 2026-03-11 01:06:05 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:05.837958 | orchestrator | 2026-03-11 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:08.877552 | orchestrator | 2026-03-11 01:06:08 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:08.877607 | orchestrator | 2026-03-11 01:06:08 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:08.877613 | orchestrator | 2026-03-11 01:06:08 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:08.877617 | orchestrator | 2026-03-11 01:06:08 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:08.877621 | orchestrator | 2026-03-11 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:11.910408 | orchestrator | 2026-03-11 01:06:11 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:11.911904 | orchestrator | 2026-03-11 01:06:11 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:11.913409 | orchestrator | 2026-03-11 01:06:11 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:11.914956 | orchestrator | 2026-03-11 01:06:11 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:11.915034 | orchestrator | 2026-03-11 01:06:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:14.957223 | orchestrator | 2026-03-11 01:06:14 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:14.960361 | orchestrator | 2026-03-11 01:06:14 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:14.962195 | orchestrator | 2026-03-11 01:06:14 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:14.963868 | orchestrator | 2026-03-11 01:06:14 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:14.963900 | orchestrator | 2026-03-11 01:06:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:18.009899 | orchestrator | 2026-03-11 01:06:18 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:18.012147 | orchestrator | 2026-03-11 01:06:18 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:18.012201 | orchestrator | 2026-03-11 01:06:18 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:18.013160 | orchestrator | 2026-03-11 01:06:18 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:18.013189 | orchestrator | 2026-03-11 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:21.044301 | orchestrator | 2026-03-11 01:06:21 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:21.045247 | orchestrator | 2026-03-11 01:06:21 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:21.046773 | orchestrator | 2026-03-11 01:06:21 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:21.049569 | orchestrator | 2026-03-11 01:06:21 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:21.049626 | orchestrator | 2026-03-11 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:24.082236 | orchestrator | 2026-03-11 01:06:24 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:24.084483 | orchestrator | 2026-03-11 01:06:24 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:24.086569 | orchestrator | 2026-03-11 01:06:24 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:24.088583 | orchestrator | 2026-03-11 01:06:24 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:24.088640 | orchestrator | 2026-03-11 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:27.124569 | orchestrator | 2026-03-11 01:06:27 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:27.125856 | orchestrator | 2026-03-11 01:06:27 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:27.127397 | orchestrator | 2026-03-11 01:06:27 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:27.129788 | orchestrator | 2026-03-11 01:06:27 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:27.129837 | orchestrator | 2026-03-11 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:30.177146 | orchestrator | 2026-03-11 01:06:30 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:30.177518 | orchestrator | 2026-03-11 01:06:30 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:30.179934 | orchestrator | 2026-03-11 01:06:30 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:30.182246 | orchestrator | 2026-03-11 01:06:30 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:30.182287 | orchestrator | 2026-03-11 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:33.216234 | orchestrator | 2026-03-11 01:06:33 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:33.220542 | orchestrator | 2026-03-11 01:06:33 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:33.222094 | orchestrator | 2026-03-11 01:06:33 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:33.224459 | orchestrator | 2026-03-11 01:06:33 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:33.224765 | orchestrator | 2026-03-11 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:36.267667 | orchestrator | 2026-03-11 01:06:36 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:36.268743 | orchestrator | 2026-03-11 01:06:36 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:36.269847 | orchestrator | 2026-03-11 01:06:36 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:36.271764 | orchestrator | 2026-03-11 01:06:36 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:36.271808 | orchestrator | 2026-03-11 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:39.310128 | orchestrator | 2026-03-11 01:06:39 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:39.311141 | orchestrator | 2026-03-11 01:06:39 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:39.311811 | orchestrator | 2026-03-11 01:06:39 | INFO  | Task 
889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:39.312514 | orchestrator | 2026-03-11 01:06:39 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:39.312660 | orchestrator | 2026-03-11 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:42.335133 | orchestrator | 2026-03-11 01:06:42 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:42.335488 | orchestrator | 2026-03-11 01:06:42 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:42.336094 | orchestrator | 2026-03-11 01:06:42 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state STARTED 2026-03-11 01:06:42.336775 | orchestrator | 2026-03-11 01:06:42 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:42.336794 | orchestrator | 2026-03-11 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:45.364927 | orchestrator | 2026-03-11 01:06:45 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:45.366158 | orchestrator | 2026-03-11 01:06:45 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:45.368356 | orchestrator | 2026-03-11 01:06:45.368413 | orchestrator | 2026-03-11 01:06:45 | INFO  | Task 889c7a4b-ec91-4687-9622-d93fa4658ebd is in state SUCCESS 2026-03-11 01:06:45.369444 | orchestrator | 2026-03-11 01:06:45.369482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:06:45.369488 | orchestrator | 2026-03-11 01:06:45.369493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:06:45.369499 | orchestrator | Wednesday 11 March 2026 01:03:38 +0000 (0:00:00.250) 0:00:00.250 ******* 2026-03-11 01:06:45.369505 | orchestrator | ok: [testbed-manager] 2026-03-11 01:06:45.369511 | orchestrator | ok: [testbed-node-0] 
2026-03-11 01:06:45.369516 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:06:45.369522 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:06:45.369527 | orchestrator | ok: [testbed-node-3]
2026-03-11 01:06:45.369533 | orchestrator | ok: [testbed-node-4]
2026-03-11 01:06:45.369538 | orchestrator | ok: [testbed-node-5]
2026-03-11 01:06:45.369544 | orchestrator |
2026-03-11 01:06:45.369550 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:06:45.369556 | orchestrator | Wednesday 11 March 2026 01:03:39 +0000 (0:00:00.730) 0:00:00.980 *******
2026-03-11 01:06:45.369562 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-11 01:06:45.369569 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-11 01:06:45.369587 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-11 01:06:45.369592 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-11 01:06:45.369598 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-11 01:06:45.369603 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-11 01:06:45.369608 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-11 01:06:45.369614 | orchestrator |
2026-03-11 01:06:45.369619 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-11 01:06:45.369624 | orchestrator |
2026-03-11 01:06:45.369629 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-11 01:06:45.369633 | orchestrator | Wednesday 11 March 2026 01:03:40 +0000 (0:00:00.728) 0:00:01.709 *******
2026-03-11 01:06:45.369639 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 01:06:45.369646 | orchestrator |
2026-03-11 01:06:45.369651 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-11 01:06:45.369674 | orchestrator | Wednesday 11 March 2026 01:03:41 +0000 (0:00:01.262) 0:00:02.971 *******
2026-03-11 01:06:45.369682 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-11 01:06:45.369691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:06:45.369706 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:06:45.369712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:06:45.369730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:06:45.369741 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:06:45.369792 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.369799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.369804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.369808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.369872 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 01:06:45.369892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.369898 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.369903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.369923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.369929 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-11 01:06:45.369935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.369999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-03-11 01:06:45.370073 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370122 | orchestrator | 2026-03-11 01:06:45.370128 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-11 01:06:45.370133 | orchestrator | Wednesday 11 March 2026 01:03:44 +0000 (0:00:02.846) 0:00:05.818 ******* 2026-03-11 01:06:45.370136 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2026-03-11 01:06:45.370140 | orchestrator | 2026-03-11 01:06:45.370145 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-11 01:06:45.370150 | orchestrator | Wednesday 11 March 2026 01:03:45 +0000 (0:00:01.389) 0:00:07.208 ******* 2026-03-11 01:06:45.370155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.370162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 01:06:45.370170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.370187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.370193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.370219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-03-11 01:06:45.370226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.370233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370245 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.370252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370266 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370277 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370302 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370310 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 01:06:45.370314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.370328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370331 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.370341 | orchestrator | 2026-03-11 01:06:45.370345 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-11 01:06:45.370348 | orchestrator | Wednesday 11 March 2026 01:03:51 +0000 (0:00:05.765) 0:00:12.974 ******* 2026-03-11 01:06:45.370352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370376 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.370380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 01:06:45.370383 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370387 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370395 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 01:06:45.370398 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370473 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.370477 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.370480 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.370484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-03-11 01:06:45.370497 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.370500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370515 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.370518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370532 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.370535 | orchestrator | 2026-03-11 01:06:45.370538 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-11 01:06:45.370542 | orchestrator | Wednesday 11 March 2026 01:03:53 +0000 (0:00:02.188) 0:00:15.162 ******* 2026-03-11 01:06:45.370545 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 01:06:45.370551 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370598 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370604 | orchestrator 
| skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 01:06:45.370608 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370615 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.370619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370640 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.370648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-03-11 01:06:45.370674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370704 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.370710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-11 01:06:45.370731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370746 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.370752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.370776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:06:45.370779 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.370785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.370788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:06:45.371188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.371235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.371239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.371243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 01:06:45.371247 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.371250 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.371254 | orchestrator | 2026-03-11 01:06:45.371257 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-11 01:06:45.371261 | orchestrator | Wednesday 11 March 2026 01:03:56 +0000 
(0:00:02.285) 0:00:17.447 ******* 2026-03-11 01:06:45.371264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.371271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.371275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.371283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.371289 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 01:06:45.371293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.371297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.371300 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.371305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371331 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 01:06:45.371391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371399 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.371417 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.371441 | orchestrator | 2026-03-11 01:06:45.371447 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-11 01:06:45.371452 | orchestrator | Wednesday 11 March 2026 01:04:02 +0000 (0:00:06.071) 0:00:23.519 ******* 2026-03-11 01:06:45.371458 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 01:06:45.371465 | orchestrator | 2026-03-11 01:06:45.371470 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-11 01:06:45.371476 | orchestrator | 
Wednesday 11 March 2026 01:04:03 +0000 (0:00:01.132) 0:00:24.651 ******* 2026-03-11 01:06:45.371480 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088957, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8232355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371486 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088957, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8232355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371496 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088957, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8232355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371500 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088957, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8232355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371503 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088957, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8232355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371506 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088957, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8232355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371510 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088967, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8278446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088967, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8278446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371529 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088967, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8278446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088955, 'dev': 111, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371544 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088955, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371550 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088957, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8232355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:06:45.371554 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088967, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8278446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371557 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088967, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8278446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371561 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088967, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8278446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371569 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088963, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8270292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 
01:06:45.371575 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088963, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8270292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371579 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088955, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371582 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088953, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371585 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088955, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371589 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088955, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371592 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088955, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371599 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088958, 'dev': 111, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8239942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371604 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088967, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8278446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:06:45.371608 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088963, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8270292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371611 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088953, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371615 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088963, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8270292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371618 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088963, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8270292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371621 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088963, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8270292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371629 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088962, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8268445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371634 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088953, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371640 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088958, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8239942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371645 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088958, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8239942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371681 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088959, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8242354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371689 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088953, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371698 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088953, 'dev': 111, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371707 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088962, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8268445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371712 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371721 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088953, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371727 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088955, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:06:45.371733 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088962, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8268445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371738 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088958, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8239942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371747 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088958, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8239942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371756 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8276935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371762 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088958, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8239942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371770 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088959, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8242354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371791 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088959, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8242354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371795 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088962, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8268445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371799 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088951, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773188334.8215034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371805 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088962, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8268445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371811 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371815 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371822 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088963, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8270292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:06:45.371826 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088959, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8242354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371830 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371836 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088962, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8268445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371840 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088959, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8242354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371848 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8276935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371854 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088965, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8275354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371865 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8276935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371873 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371878 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088959, 'dev': 111, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8242354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371896 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088954, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8220074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371902 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088951, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8215034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371911 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088951, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8215034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371916 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088952, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371923 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8276935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:06:45.371927 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088953, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:06:45.371931 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.371964 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.371969 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088951, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8215034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.371975 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.371979 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.371987 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088961, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.826697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.371992 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.371998 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8276935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372011 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088965, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8275354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372017 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8276935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True,
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372026 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088965, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8275354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372031 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088960, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8256476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372279 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088951, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8215034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088954, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8220074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372309 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088958, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8239942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372315 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088974, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372320 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:06:45.372325 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088952, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372333 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088965, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8275354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372336 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088951, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8215034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372343 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk':
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372346 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088954, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8220074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372353 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088965, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8275354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372356 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088954, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8220074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372360 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088961, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.826697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372365 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372368 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088952, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372375 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088965, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8275354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372380 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088954, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8220074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372391 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088960, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8256476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372397 | orchestrator | skipping: [testbed-node-1] => (item={'path':
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088952, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372403 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088961, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.826697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372412 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088952, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372416 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088960, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8256476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372425 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088962, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8268445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372437 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088961, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.826697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372443 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088954, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8220074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372449 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088974, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372454 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088974, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372459 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:06:45.372465 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:06:45.372473 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088961, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.826697, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372478 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088952, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372488 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088960, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8256476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372500 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088960, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8256476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372506 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088974, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372512 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:06:45.372516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088974, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372519 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:06:45.372523 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088961, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.826697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372528 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088959, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8242354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372532 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088960, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8256476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372539 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088974, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372543 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:06:45.372546 | orchestrator | changed: [testbed-manager] =>
(item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8222356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372549 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8276935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372553 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088951, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8215034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372556 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372561 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088965, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8275354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372564 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088954, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8220074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372573 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088952, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.821709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372576 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088961, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.826697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372579 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088960, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8256476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372583 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088974, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8295875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False,
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:06:45.372586 | orchestrator |
2026-03-11 01:06:45.372590 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-11 01:06:45.372593 | orchestrator | Wednesday 11 March 2026 01:04:28 +0000 (0:00:25.185) 0:00:49.836 *******
2026-03-11 01:06:45.372596 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 01:06:45.372600 | orchestrator |
2026-03-11 01:06:45.372603 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-11 01:06:45.372606 | orchestrator | Wednesday 11 March 2026 01:04:29 +0000 (0:00:00.714) 0:00:50.551 *******
2026-03-11 01:06:45.372609 | orchestrator | [WARNING]: Skipped
2026-03-11 01:06:45.372613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372617 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-11 01:06:45.372620 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372623 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-11 01:06:45.372627 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 01:06:45.372630 | orchestrator | [WARNING]: Skipped
2026-03-11 01:06:45.372633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372636 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-11 01:06:45.372643 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372646 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-11 01:06:45.372652 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:06:45.372655 | orchestrator | [WARNING]: Skipped
2026-03-11 01:06:45.372658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372662 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-11 01:06:45.372665 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372668 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-11 01:06:45.372672 | orchestrator | [WARNING]: Skipped
2026-03-11 01:06:45.372675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372678 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-11 01:06:45.372681 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372685 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-11 01:06:45.372688 | orchestrator | [WARNING]: Skipped
2026-03-11 01:06:45.372691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372696 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-11 01:06:45.372700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372703 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-11 01:06:45.372706 | orchestrator | [WARNING]: Skipped
2026-03-11 01:06:45.372710 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372713 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-11 01:06:45.372716 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372720 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-11 01:06:45.372723 | orchestrator | [WARNING]: Skipped
2026-03-11 01:06:45.372726 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372729 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-11 01:06:45.372733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-11 01:06:45.372736 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-11 01:06:45.372739 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-11 01:06:45.372742 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-11 01:06:45.372746 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-11 01:06:45.372749 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-11 01:06:45.372752 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-11 01:06:45.372755 | orchestrator |
2026-03-11 01:06:45.372758 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-11 01:06:45.372762 | orchestrator | Wednesday 11 March 2026 01:04:31 +0000 (0:00:01.766) 0:00:52.318 *******
2026-03-11 01:06:45.372765 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-11 01:06:45.372768 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:06:45.372772 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-11 01:06:45.372775 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:06:45.372778 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-11 01:06:45.372781 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:06:45.372785 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-11 01:06:45.372788 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:06:45.372791 | orchestrator | skipping: [testbed-node-3] =>
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:06:45.372797 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.372800 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:06:45.372804 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.372807 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-11 01:06:45.372810 | orchestrator | 2026-03-11 01:06:45.372813 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-11 01:06:45.372817 | orchestrator | Wednesday 11 March 2026 01:04:44 +0000 (0:00:13.537) 0:01:05.855 ******* 2026-03-11 01:06:45.372820 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:06:45.372823 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.372826 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:06:45.372830 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.372833 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:06:45.372836 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.372839 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:06:45.372843 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.372847 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:06:45.372852 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.372857 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:06:45.372862 | orchestrator | 
skipping: [testbed-node-5] 2026-03-11 01:06:45.372871 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-11 01:06:45.372877 | orchestrator | 2026-03-11 01:06:45.372885 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-11 01:06:45.372891 | orchestrator | Wednesday 11 March 2026 01:04:47 +0000 (0:00:02.767) 0:01:08.623 ******* 2026-03-11 01:06:45.372897 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:06:45.372903 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.372909 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:06:45.372914 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.372920 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:06:45.372926 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:06:45.372935 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.372941 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.373018 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-11 01:06:45.373028 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:06:45.373033 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.373039 | orchestrator | skipping: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:06:45.373045 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.373051 | orchestrator | 2026-03-11 01:06:45.373057 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-11 01:06:45.373068 | orchestrator | Wednesday 11 March 2026 01:04:49 +0000 (0:00:02.320) 0:01:10.944 ******* 2026-03-11 01:06:45.373074 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 01:06:45.373079 | orchestrator | 2026-03-11 01:06:45.373084 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-11 01:06:45.373090 | orchestrator | Wednesday 11 March 2026 01:04:50 +0000 (0:00:00.575) 0:01:11.519 ******* 2026-03-11 01:06:45.373095 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.373100 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.373106 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.373111 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.373117 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.373122 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.373128 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.373134 | orchestrator | 2026-03-11 01:06:45.373140 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-11 01:06:45.373146 | orchestrator | Wednesday 11 March 2026 01:04:50 +0000 (0:00:00.722) 0:01:12.242 ******* 2026-03-11 01:06:45.373152 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.373157 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.373163 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:45.373168 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.373174 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.373179 | 
orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:45.373185 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:45.373191 | orchestrator | 2026-03-11 01:06:45.373197 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-11 01:06:45.373202 | orchestrator | Wednesday 11 March 2026 01:04:53 +0000 (0:00:02.754) 0:01:14.997 ******* 2026-03-11 01:06:45.373208 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:06:45.373214 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.373219 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:06:45.373225 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:06:45.373231 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.373237 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.373243 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:06:45.373249 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.373255 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:06:45.373260 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.373266 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:06:45.373272 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.373278 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:06:45.373284 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.373289 | orchestrator | 2026-03-11 01:06:45.373293 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-11 
01:06:45.373297 | orchestrator | Wednesday 11 March 2026 01:04:55 +0000 (0:00:01.764) 0:01:16.761 ******* 2026-03-11 01:06:45.373303 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:06:45.373309 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:06:45.373314 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.373319 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.373329 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:06:45.373336 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.373345 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:06:45.373351 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.373356 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:06:45.373362 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.373367 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-11 01:06:45.373371 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:06:45.373376 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.373382 | orchestrator | 2026-03-11 01:06:45.373387 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-11 01:06:45.373397 | orchestrator | Wednesday 11 March 2026 01:04:57 +0000 (0:00:02.138) 0:01:18.900 ******* 2026-03-11 01:06:45.373402 | orchestrator | [WARNING]: Skipped 2026-03-11 01:06:45.373407 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-11 01:06:45.373412 | orchestrator | due to this access issue: 2026-03-11 01:06:45.373418 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-11 01:06:45.373423 | orchestrator | not a directory 2026-03-11 01:06:45.373428 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 01:06:45.373433 | orchestrator | 2026-03-11 01:06:45.373439 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-11 01:06:45.373443 | orchestrator | Wednesday 11 March 2026 01:04:58 +0000 (0:00:01.206) 0:01:20.106 ******* 2026-03-11 01:06:45.373449 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.373454 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.373459 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.373464 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.373470 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.373475 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.373481 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:06:45.373486 | orchestrator | 2026-03-11 01:06:45.373492 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-11 01:06:45.373497 | orchestrator | Wednesday 11 March 2026 01:05:00 +0000 (0:00:01.244) 0:01:21.351 ******* 2026-03-11 01:06:45.373503 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.373509 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:45.373514 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:45.373519 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:45.373525 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:06:45.373530 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:06:45.373535 | orchestrator | skipping: [testbed-node-5] 2026-03-11 
01:06:45.373541 | orchestrator | 2026-03-11 01:06:45.373546 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-11 01:06:45.373552 | orchestrator | Wednesday 11 March 2026 01:05:00 +0000 (0:00:00.807) 0:01:22.159 ******* 2026-03-11 01:06:45.373558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.373567 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 01:06:45.373583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.373591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.373601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.373613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.373618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.373623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:06:45.373663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373702 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373717 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:06:45.373752 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 01:06:45.373762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373772 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:06:45.373777 | orchestrator | 2026-03-11 01:06:45.373782 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-11 01:06:45.373787 | orchestrator | Wednesday 11 March 2026 01:05:05 +0000 (0:00:04.619) 0:01:26.779 ******* 2026-03-11 01:06:45.373797 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-11 01:06:45.373803 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:06:45.373809 | orchestrator | 2026-03-11 01:06:45.373814 | 
orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:06:45.373819 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:01.321) 0:01:28.100 ******* 2026-03-11 01:06:45.373825 | orchestrator | 2026-03-11 01:06:45.373830 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:06:45.373839 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:00.053) 0:01:28.153 ******* 2026-03-11 01:06:45.373845 | orchestrator | 2026-03-11 01:06:45.373850 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:06:45.373855 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:00.052) 0:01:28.206 ******* 2026-03-11 01:06:45.373860 | orchestrator | 2026-03-11 01:06:45.373865 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:06:45.373870 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.080) 0:01:28.287 ******* 2026-03-11 01:06:45.373876 | orchestrator | 2026-03-11 01:06:45.373881 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:06:45.373887 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.243) 0:01:28.531 ******* 2026-03-11 01:06:45.373892 | orchestrator | 2026-03-11 01:06:45.373898 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:06:45.373903 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.078) 0:01:28.609 ******* 2026-03-11 01:06:45.373908 | orchestrator | 2026-03-11 01:06:45.373911 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:06:45.373915 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.110) 0:01:28.719 ******* 2026-03-11 01:06:45.373918 | orchestrator | 2026-03-11 01:06:45.373921 
| orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-11 01:06:45.373925 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.168) 0:01:28.888 ******* 2026-03-11 01:06:45.373928 | orchestrator | changed: [testbed-manager] 2026-03-11 01:06:45.373931 | orchestrator | 2026-03-11 01:06:45.373934 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-11 01:06:45.373938 | orchestrator | Wednesday 11 March 2026 01:05:26 +0000 (0:00:18.959) 0:01:47.847 ******* 2026-03-11 01:06:45.373941 | orchestrator | changed: [testbed-manager] 2026-03-11 01:06:45.373944 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:45.373963 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:06:45.373967 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:06:45.373970 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:06:45.373973 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:45.373977 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:45.373980 | orchestrator | 2026-03-11 01:06:45.373983 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-11 01:06:45.373987 | orchestrator | Wednesday 11 March 2026 01:05:40 +0000 (0:00:13.988) 0:02:01.836 ******* 2026-03-11 01:06:45.373990 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:45.373994 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:45.373997 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:45.374000 | orchestrator | 2026-03-11 01:06:45.374003 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-11 01:06:45.374007 | orchestrator | Wednesday 11 March 2026 01:05:50 +0000 (0:00:10.014) 0:02:11.850 ******* 2026-03-11 01:06:45.374010 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:45.374038 | orchestrator | changed: [testbed-node-2] 2026-03-11 
01:06:45.374044 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:45.374049 | orchestrator | 2026-03-11 01:06:45.374055 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-11 01:06:45.374060 | orchestrator | Wednesday 11 March 2026 01:06:01 +0000 (0:00:10.886) 0:02:22.737 ******* 2026-03-11 01:06:45.374071 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:45.374077 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:06:45.374081 | orchestrator | changed: [testbed-manager] 2026-03-11 01:06:45.374087 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:06:45.374092 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:06:45.374104 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:45.374109 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:45.374113 | orchestrator | 2026-03-11 01:06:45.374116 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-11 01:06:45.374139 | orchestrator | Wednesday 11 March 2026 01:06:11 +0000 (0:00:09.763) 0:02:32.501 ******* 2026-03-11 01:06:45.374144 | orchestrator | changed: [testbed-manager] 2026-03-11 01:06:45.374147 | orchestrator | 2026-03-11 01:06:45.374151 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-11 01:06:45.374156 | orchestrator | Wednesday 11 March 2026 01:06:18 +0000 (0:00:07.271) 0:02:39.773 ******* 2026-03-11 01:06:45.374161 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:45.374166 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:45.374171 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:45.374177 | orchestrator | 2026-03-11 01:06:45.374182 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-11 01:06:45.374187 | orchestrator | Wednesday 11 March 2026 01:06:29 +0000 (0:00:10.722) 0:02:50.495 ******* 2026-03-11 
01:06:45.374193 | orchestrator | changed: [testbed-manager] 2026-03-11 01:06:45.374198 | orchestrator | 2026-03-11 01:06:45.374206 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-11 01:06:45.374212 | orchestrator | Wednesday 11 March 2026 01:06:34 +0000 (0:00:05.264) 0:02:55.760 ******* 2026-03-11 01:06:45.374217 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:06:45.374223 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:06:45.374228 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:06:45.374233 | orchestrator | 2026-03-11 01:06:45.374239 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:06:45.374245 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-11 01:06:45.374251 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:06:45.374258 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:06:45.374263 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:06:45.374269 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:06:45.374274 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:06:45.374278 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:06:45.374281 | orchestrator | 2026-03-11 01:06:45.374284 | orchestrator | 2026-03-11 01:06:45.374288 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:06:45.374291 | orchestrator | Wednesday 11 March 2026 01:06:41 +0000 (0:00:07.486) 0:03:03.246 ******* 
2026-03-11 01:06:45.374295 | orchestrator | =============================================================================== 2026-03-11 01:06:45.374298 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.19s 2026-03-11 01:06:45.374301 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.96s 2026-03-11 01:06:45.374309 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.99s 2026-03-11 01:06:45.374312 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.54s 2026-03-11 01:06:45.374316 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.89s 2026-03-11 01:06:45.374321 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.72s 2026-03-11 01:06:45.374326 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.01s 2026-03-11 01:06:45.374334 | orchestrator | prometheus : Restart prometheus-cadvisor container ---------------------- 9.76s 2026-03-11 01:06:45.374341 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.49s 2026-03-11 01:06:45.374346 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.27s 2026-03-11 01:06:45.374354 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.07s 2026-03-11 01:06:45.374359 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.77s 2026-03-11 01:06:45.374365 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.26s 2026-03-11 01:06:45.374371 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.62s 2026-03-11 01:06:45.374376 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.85s 2026-03-11 
01:06:45.374382 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.77s 2026-03-11 01:06:45.374385 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.75s 2026-03-11 01:06:45.374388 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.32s 2026-03-11 01:06:45.374392 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.29s 2026-03-11 01:06:45.374395 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.19s 2026-03-11 01:06:45.374401 | orchestrator | 2026-03-11 01:06:45 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:06:45.374405 | orchestrator | 2026-03-11 01:06:45 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:45.374408 | orchestrator | 2026-03-11 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:48.395246 | orchestrator | 2026-03-11 01:06:48 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:48.395589 | orchestrator | 2026-03-11 01:06:48 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:48.396220 | orchestrator | 2026-03-11 01:06:48 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:06:48.397049 | orchestrator | 2026-03-11 01:06:48 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:48.397078 | orchestrator | 2026-03-11 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:51.432283 | orchestrator | 2026-03-11 01:06:51 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:51.435543 | orchestrator | 2026-03-11 01:06:51 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:51.437667 | orchestrator | 2026-03-11 01:06:51 | 
INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:06:51.439139 | orchestrator | 2026-03-11 01:06:51 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:51.439329 | orchestrator | 2026-03-11 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:54.475275 | orchestrator | 2026-03-11 01:06:54 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:54.476119 | orchestrator | 2026-03-11 01:06:54 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:54.476744 | orchestrator | 2026-03-11 01:06:54 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:06:54.477641 | orchestrator | 2026-03-11 01:06:54 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:54.477666 | orchestrator | 2026-03-11 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:57.516152 | orchestrator | 2026-03-11 01:06:57 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:06:57.517633 | orchestrator | 2026-03-11 01:06:57 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:06:57.519325 | orchestrator | 2026-03-11 01:06:57 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:06:57.520793 | orchestrator | 2026-03-11 01:06:57 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:06:57.520836 | orchestrator | 2026-03-11 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:00.566654 | orchestrator | 2026-03-11 01:07:00 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:07:00.567393 | orchestrator | 2026-03-11 01:07:00 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:00.569203 | orchestrator | 2026-03-11 01:07:00 | INFO  | Task 
73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:00.569831 | orchestrator | 2026-03-11 01:07:00 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:07:00.569864 | orchestrator | 2026-03-11 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:03.612685 | orchestrator | 2026-03-11 01:07:03 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:07:03.614720 | orchestrator | 2026-03-11 01:07:03 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:03.615942 | orchestrator | 2026-03-11 01:07:03 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:03.617479 | orchestrator | 2026-03-11 01:07:03 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:07:03.617700 | orchestrator | 2026-03-11 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:06.666193 | orchestrator | 2026-03-11 01:07:06 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state STARTED 2026-03-11 01:07:06.666243 | orchestrator | 2026-03-11 01:07:06 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:06.666248 | orchestrator | 2026-03-11 01:07:06 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:06.667112 | orchestrator | 2026-03-11 01:07:06 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:07:06.667279 | orchestrator | 2026-03-11 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:09.717317 | orchestrator | 2026-03-11 01:07:09 | INFO  | Task b75c17c1-fce5-4a83-ac98-64433a2c82ba is in state SUCCESS 2026-03-11 01:07:09.719582 | orchestrator | 2026-03-11 01:07:09.719632 | orchestrator | 2026-03-11 01:07:09.719639 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:07:09.719645 | 
orchestrator | 2026-03-11 01:07:09.719650 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:07:09.719656 | orchestrator | Wednesday 11 March 2026 01:04:18 +0000 (0:00:00.196) 0:00:00.196 ******* 2026-03-11 01:07:09.719661 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:07:09.719666 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:07:09.719671 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:07:09.719691 | orchestrator | 2026-03-11 01:07:09.719696 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:07:09.719701 | orchestrator | Wednesday 11 March 2026 01:04:18 +0000 (0:00:00.209) 0:00:00.406 ******* 2026-03-11 01:07:09.719706 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-11 01:07:09.719711 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-11 01:07:09.719716 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-11 01:07:09.719721 | orchestrator | 2026-03-11 01:07:09.719726 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-11 01:07:09.719731 | orchestrator | 2026-03-11 01:07:09.719735 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-11 01:07:09.719740 | orchestrator | Wednesday 11 March 2026 01:04:19 +0000 (0:00:00.308) 0:00:00.714 ******* 2026-03-11 01:07:09.719745 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:09.719750 | orchestrator | 2026-03-11 01:07:09.719755 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-11 01:07:09.719760 | orchestrator | Wednesday 11 March 2026 01:04:19 +0000 (0:00:00.418) 0:00:01.133 ******* 2026-03-11 01:07:09.719765 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 
2026-03-11 01:07:09.719770 | orchestrator | 2026-03-11 01:07:09.719775 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-11 01:07:09.719780 | orchestrator | Wednesday 11 March 2026 01:04:22 +0000 (0:00:03.492) 0:00:04.625 ******* 2026-03-11 01:07:09.719785 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-11 01:07:09.719790 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-11 01:07:09.719794 | orchestrator | 2026-03-11 01:07:09.719799 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-11 01:07:09.719804 | orchestrator | Wednesday 11 March 2026 01:04:29 +0000 (0:00:06.177) 0:00:10.803 ******* 2026-03-11 01:07:09.719809 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:07:09.719814 | orchestrator | 2026-03-11 01:07:09.719819 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-11 01:07:09.719824 | orchestrator | Wednesday 11 March 2026 01:04:31 +0000 (0:00:02.836) 0:00:13.639 ******* 2026-03-11 01:07:09.719829 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:07:09.719834 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-11 01:07:09.719839 | orchestrator | 2026-03-11 01:07:09.719844 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-11 01:07:09.719849 | orchestrator | Wednesday 11 March 2026 01:04:35 +0000 (0:00:03.311) 0:00:16.951 ******* 2026-03-11 01:07:09.719854 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:07:09.719858 | orchestrator | 2026-03-11 01:07:09.719863 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-11 01:07:09.719868 | orchestrator | 
Wednesday 11 March 2026 01:04:38 +0000 (0:00:02.976) 0:00:19.927 ******* 2026-03-11 01:07:09.719873 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-11 01:07:09.719878 | orchestrator | 2026-03-11 01:07:09.719883 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-11 01:07:09.719888 | orchestrator | Wednesday 11 March 2026 01:04:41 +0000 (0:00:03.087) 0:00:23.014 ******* 2026-03-11 01:07:09.720020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.720040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.720056 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.720075 | orchestrator | 2026-03-11 01:07:09.720084 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-11 01:07:09.720091 | orchestrator | Wednesday 11 March 2026 01:04:44 +0000 (0:00:03.326) 0:00:26.341 ******* 2026-03-11 01:07:09.720099 | 
orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:09.720108 | orchestrator | 2026-03-11 01:07:09.720120 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-11 01:07:09.720128 | orchestrator | Wednesday 11 March 2026 01:04:45 +0000 (0:00:01.045) 0:00:27.386 ******* 2026-03-11 01:07:09.720137 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:07:09.720145 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:09.720154 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:07:09.720162 | orchestrator | 2026-03-11 01:07:09.720169 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-11 01:07:09.720176 | orchestrator | Wednesday 11 March 2026 01:04:50 +0000 (0:00:04.831) 0:00:32.218 ******* 2026-03-11 01:07:09.720183 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:09.720192 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:09.720200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:09.720208 | orchestrator | 2026-03-11 01:07:09.720216 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-11 01:07:09.720224 | orchestrator | Wednesday 11 March 2026 01:04:52 +0000 (0:00:02.151) 0:00:34.369 ******* 2026-03-11 01:07:09.720233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:09.720241 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:09.720248 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:09.720253 | orchestrator | 2026-03-11 01:07:09.720258 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-11 01:07:09.720270 | orchestrator | Wednesday 11 March 2026 01:04:54 +0000 (0:00:01.549) 0:00:35.919 ******* 2026-03-11 01:07:09.720275 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:07:09.720280 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:07:09.720285 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:07:09.720290 | orchestrator | 2026-03-11 01:07:09.720295 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-11 01:07:09.720299 | orchestrator | Wednesday 11 March 2026 01:04:55 +0000 (0:00:00.956) 0:00:36.875 ******* 2026-03-11 01:07:09.720307 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:09.720315 | orchestrator | 2026-03-11 01:07:09.720327 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-11 01:07:09.720336 | orchestrator | Wednesday 11 March 2026 01:04:55 +0000 (0:00:00.122) 0:00:36.998 ******* 2026-03-11 01:07:09.720344 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:09.720353 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:09.720361 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:09.720368 | orchestrator | 2026-03-11 01:07:09.720375 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-11 01:07:09.720389 | orchestrator | Wednesday 11 March 2026 01:04:55 +0000 (0:00:00.384) 0:00:37.382 ******* 2026-03-11 01:07:09.720397 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:09.720404 | orchestrator | 2026-03-11 01:07:09.720411 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA 
certificates] ********* 2026-03-11 01:07:09.720419 | orchestrator | Wednesday 11 March 2026 01:04:56 +0000 (0:00:00.857) 0:00:38.240 ******* 2026-03-11 01:07:09.720442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.720515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.720526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-11 01:07:09.720536 | orchestrator |
2026-03-11 01:07:09.720542 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-03-11 01:07:09.720547 | orchestrator | Wednesday 11 March 2026 01:05:00 +0000 (0:00:04.455) 0:00:42.696 *******
2026-03-11 01:07:09.720560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '',
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:07:09.720630 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:09.720649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:07:09.720665 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:09.720680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-11 01:07:09.720688 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.720696 | orchestrator |
2026-03-11 01:07:09.720704 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-03-11 01:07:09.720712 | orchestrator | Wednesday 11 March 2026 01:05:05 +0000 (0:00:04.932) 0:00:47.628 *******
2026-03-11 01:07:09.720721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '',
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:07:09.720734 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:09.720751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:07:09.720760 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:09.720769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-11 01:07:09.720782 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.720791 | orchestrator |
2026-03-11 01:07:09.720799 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-03-11 01:07:09.720807 | orchestrator | Wednesday 11 March 2026 01:05:09 +0000 (0:00:03.765) 0:00:51.394 *******
2026-03-11 01:07:09.720816 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.720821 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.720826 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.720831 | orchestrator |
2026-03-11 01:07:09.720836 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-03-11 01:07:09.720841 | orchestrator | Wednesday 11 March 2026 01:05:13 +0000 (0:00:03.863) 0:00:55.257 *******
2026-03-11 01:07:09.720852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.720862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.720875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-11 01:07:09.720881 | orchestrator |
2026-03-11 01:07:09.720886 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-03-11 01:07:09.720891 | orchestrator | Wednesday 11 March 2026 01:05:18 +0000 (0:00:04.637) 0:00:59.894 *******
2026-03-11 01:07:09.720896 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:07:09.720901 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:07:09.720905 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.720935 | orchestrator |
2026-03-11 01:07:09.720941 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-03-11 01:07:09.720946 | orchestrator | Wednesday 11 March 2026 01:05:25 +0000 (0:00:07.524) 0:01:07.419 *******
2026-03-11 01:07:09.720950 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.720955 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.720968 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.720979 | orchestrator |
2026-03-11 01:07:09.720984 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-03-11 01:07:09.720989 | orchestrator | Wednesday 11 March 2026 01:05:32 +0000 (0:00:06.360) 0:01:13.779 *******
2026-03-11 01:07:09.720996 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.721007 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.721016 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.721024 | orchestrator |
2026-03-11 01:07:09.721032 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-03-11 01:07:09.721041 | orchestrator | Wednesday 11 March 2026 01:05:36 +0000 (0:00:04.337) 0:01:18.117 *******
2026-03-11 01:07:09.721049 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.721063 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.721068 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.721073 | orchestrator |
2026-03-11 01:07:09.721078 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-03-11 01:07:09.721083 | orchestrator | Wednesday 11 March 2026 01:05:39 +0000 (0:00:02.821) 0:01:20.938 *******
2026-03-11 01:07:09.721088 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.721092 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.721097 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.721102 | orchestrator |
2026-03-11 01:07:09.721107 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-03-11 01:07:09.721112 | orchestrator | Wednesday 11 March 2026 01:05:43 +0000 (0:00:03.849) 0:01:24.787 *******
2026-03-11 01:07:09.721117 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.721122 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.721127 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.721131 | orchestrator |
2026-03-11 01:07:09.721136 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-03-11 01:07:09.721141 | orchestrator | Wednesday 11 March 2026 01:05:43 +0000 (0:00:00.254) 0:01:25.042 *******
2026-03-11 01:07:09.721148 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-11 01:07:09.721159 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.721170 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-11 01:07:09.721178 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.721186 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-11 01:07:09.721194 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.721201 | orchestrator |
2026-03-11 01:07:09.721210 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-03-11 01:07:09.721218 | orchestrator | Wednesday 11 March 2026 01:05:46 +0000 (0:00:02.910) 0:01:27.953 *******
2026-03-11 01:07:09.721227 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.721235 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:07:09.721243 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:07:09.721251 | orchestrator |
2026-03-11 01:07:09.721260 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-03-11 01:07:09.721269 | orchestrator | Wednesday 11 March 2026 01:05:50 +0000 (0:00:03.926) 0:01:31.880 *******
2026-03-11 01:07:09.721284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True,
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.721307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:07:09.721322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-11 01:07:09.721331 | orchestrator |
2026-03-11 01:07:09.721339 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-11 01:07:09.721347 | orchestrator | Wednesday 11 March 2026 01:05:54 +0000 (0:00:04.569) 0:01:36.449 *******
2026-03-11 01:07:09.721355 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:09.721362 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:09.721370 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:09.721382 | orchestrator |
2026-03-11 01:07:09.721390 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-03-11 01:07:09.721398 | orchestrator | Wednesday 11 March 2026 01:05:55 +0000 (0:00:00.309) 0:01:36.758 *******
2026-03-11 01:07:09.721405 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.721412 | orchestrator |
2026-03-11 01:07:09.721420 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-03-11 01:07:09.721428 | orchestrator | Wednesday 11 March 2026 01:05:57 +0000 (0:00:02.145) 0:01:38.904 *******
2026-03-11 01:07:09.721435 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.721443 | orchestrator |
2026-03-11 01:07:09.721451 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-03-11 01:07:09.721459 | orchestrator | Wednesday 11 March 2026 01:05:59 +0000 (0:00:02.176) 0:01:41.081 *******
2026-03-11 01:07:09.721467 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.721474 | orchestrator |
2026-03-11 01:07:09.721482 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-03-11 01:07:09.721489 | orchestrator | Wednesday 11 March 2026 01:06:01 +0000 (0:00:01.999) 0:01:43.080 *******
2026-03-11 01:07:09.721497 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.721505 | orchestrator |
2026-03-11 01:07:09.721514 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-03-11 01:07:09.721529 | orchestrator | Wednesday 11 March 2026 01:06:32 +0000 (0:00:30.708) 0:02:13.789 *******
2026-03-11 01:07:09.721538 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.721547 | orchestrator |
2026-03-11 01:07:09.721555 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-11 01:07:09.721563 | orchestrator | Wednesday 11 March 2026 01:06:34 +0000 (0:00:02.119) 0:02:15.908 *******
2026-03-11 01:07:09.721572 | orchestrator |
2026-03-11 01:07:09.721581 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-11 01:07:09.721589 | orchestrator | Wednesday 11 March 2026 01:06:35 +0000 (0:00:01.076) 0:02:16.985 *******
2026-03-11 01:07:09.721597 | orchestrator |
2026-03-11 01:07:09.721605 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-11 01:07:09.721614 | orchestrator | Wednesday 11 March 2026 01:06:35 +0000 (0:00:00.270) 0:02:17.255 *******
2026-03-11 01:07:09.721622 | orchestrator |
2026-03-11 01:07:09.721630 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-03-11 01:07:09.721639 | orchestrator | Wednesday 11 March 2026 01:06:35 +0000 (0:00:00.105) 0:02:17.361 *******
2026-03-11 01:07:09.721648 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:09.721656 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:07:09.721664 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:07:09.721673 | orchestrator |
2026-03-11 01:07:09.721680 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:07:09.721690 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-11 01:07:09.721699 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-11 01:07:09.721709 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-11 01:07:09.721714 | orchestrator |
2026-03-11 01:07:09.721719 | orchestrator |
2026-03-11 01:07:09.721724 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:07:09.721729 | orchestrator | Wednesday 11 March 2026 01:07:08 +0000 (0:00:32.784) 0:02:50.145 *******
2026-03-11 01:07:09.721734 | orchestrator | ===============================================================================
2026-03-11 01:07:09.721739 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.78s
2026-03-11 01:07:09.721744 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.71s
2026-03-11 01:07:09.721761 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.52s
2026-03-11 01:07:09.721766 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.36s
2026-03-11 01:07:09.721771 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.18s
2026-03-11 01:07:09.721776 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.93s
2026-03-11 01:07:09.721781 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.83s
2026-03-11 01:07:09.721786 | orchestrator | glance : Copying over config.json files for services -------------------- 4.64s
2026-03-11 01:07:09.721791 | orchestrator | glance : Check glance containers ---------------------------------------- 4.57s
2026-03-11 01:07:09.721796 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.46s
2026-03-11 01:07:09.721801 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.34s
2026-03-11 01:07:09.721806 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.93s
2026-03-11 01:07:09.721811 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.86s
2026-03-11 01:07:09.721815 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.85s
2026-03-11 01:07:09.721821 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.77s
2026-03-11 01:07:09.721830 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.49s
2026-03-11 01:07:09.721835 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.33s
2026-03-11 01:07:09.721840 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.31s
2026-03-11 01:07:09.721845 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.09s
2026-03-11 01:07:09.721850 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 2.98s
2026-03-11 01:07:09.721855 | orchestrator | 2026-03-11 
01:07:09 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:09.721861 | orchestrator | 2026-03-11 01:07:09 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:09.721872 | orchestrator | 2026-03-11 01:07:09 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state STARTED 2026-03-11 01:07:09.721884 | orchestrator | 2026-03-11 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:12.768003 | orchestrator | 2026-03-11 01:07:12 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:12.769206 | orchestrator | 2026-03-11 01:07:12 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:12.770714 | orchestrator | 2026-03-11 01:07:12 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:12.774262 | orchestrator | 2026-03-11 01:07:12 | INFO  | Task 5b5807c8-6f0f-4fa2-9cc4-369a976809ab is in state SUCCESS 2026-03-11 01:07:12.774746 | orchestrator | 2026-03-11 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:12.776442 | orchestrator | 2026-03-11 01:07:12.776470 | orchestrator | 2026-03-11 01:07:12.776474 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:07:12.776479 | orchestrator | 2026-03-11 01:07:12.776483 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:07:12.776487 | orchestrator | Wednesday 11 March 2026 01:04:22 +0000 (0:00:00.186) 0:00:00.186 ******* 2026-03-11 01:07:12.776491 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:07:12.776495 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:07:12.776499 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:07:12.776503 | orchestrator | 2026-03-11 01:07:12.776507 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 
01:07:12.776511 | orchestrator | Wednesday 11 March 2026 01:04:23 +0000 (0:00:00.248) 0:00:00.434 ******* 2026-03-11 01:07:12.776527 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-11 01:07:12.776531 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-11 01:07:12.776535 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-11 01:07:12.776539 | orchestrator | 2026-03-11 01:07:12.776550 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-11 01:07:12.776557 | orchestrator | 2026-03-11 01:07:12.776568 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:07:12.776576 | orchestrator | Wednesday 11 March 2026 01:04:23 +0000 (0:00:00.311) 0:00:00.746 ******* 2026-03-11 01:07:12.776582 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:12.776589 | orchestrator | 2026-03-11 01:07:12.776595 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-11 01:07:12.776601 | orchestrator | Wednesday 11 March 2026 01:04:23 +0000 (0:00:00.401) 0:00:01.148 ******* 2026-03-11 01:07:12.776607 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-11 01:07:12.776614 | orchestrator | 2026-03-11 01:07:12.776620 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-11 01:07:12.776626 | orchestrator | Wednesday 11 March 2026 01:04:26 +0000 (0:00:03.166) 0:00:04.314 ******* 2026-03-11 01:07:12.776633 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-11 01:07:12.776639 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-11 01:07:12.776646 
| orchestrator | 2026-03-11 01:07:12.776652 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-11 01:07:12.776659 | orchestrator | Wednesday 11 March 2026 01:04:32 +0000 (0:00:05.831) 0:00:10.145 ******* 2026-03-11 01:07:12.776665 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:07:12.776672 | orchestrator | 2026-03-11 01:07:12.776723 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-11 01:07:12.776731 | orchestrator | Wednesday 11 March 2026 01:04:35 +0000 (0:00:02.820) 0:00:12.966 ******* 2026-03-11 01:07:12.776777 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:07:12.776782 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-11 01:07:12.776786 | orchestrator | 2026-03-11 01:07:12.776790 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-11 01:07:12.776794 | orchestrator | Wednesday 11 March 2026 01:04:39 +0000 (0:00:03.464) 0:00:16.431 ******* 2026-03-11 01:07:12.776798 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:07:12.776801 | orchestrator | 2026-03-11 01:07:12.776805 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-11 01:07:12.776809 | orchestrator | Wednesday 11 March 2026 01:04:41 +0000 (0:00:02.895) 0:00:19.326 ******* 2026-03-11 01:07:12.776813 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-11 01:07:12.776824 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-11 01:07:12.776828 | orchestrator | 2026-03-11 01:07:12.776832 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-11 01:07:12.776836 | orchestrator | Wednesday 11 March 2026 01:04:48 +0000 (0:00:06.473) 0:00:25.800 ******* 2026-03-11 
01:07:12.776842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.776861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.776866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.776870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.776877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.776881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.776892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.776902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.776951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.776959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777209 | orchestrator | 2026-03-11 01:07:12.777214 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:07:12.777219 | orchestrator | Wednesday 11 March 2026 01:04:50 +0000 (0:00:02.382) 0:00:28.183 ******* 2026-03-11 01:07:12.777223 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.777227 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:12.777238 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:12.777246 | orchestrator | 2026-03-11 01:07:12.777250 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:07:12.777254 | orchestrator | Wednesday 11 March 
2026 01:04:51 +0000 (0:00:00.548) 0:00:28.732 ******* 2026-03-11 01:07:12.777258 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:12.777262 | orchestrator | 2026-03-11 01:07:12.777270 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-11 01:07:12.777274 | orchestrator | Wednesday 11 March 2026 01:04:52 +0000 (0:00:01.290) 0:00:30.022 ******* 2026-03-11 01:07:12.777278 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-11 01:07:12.777297 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-11 01:07:12.777301 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-11 01:07:12.777305 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-11 01:07:12.777309 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-11 01:07:12.777313 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-11 01:07:12.777317 | orchestrator | 2026-03-11 01:07:12.777320 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-11 01:07:12.777324 | orchestrator | Wednesday 11 March 2026 01:04:54 +0000 (0:00:02.354) 0:00:32.377 ******* 2026-03-11 01:07:12.777329 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:07:12.777334 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:07:12.777342 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 
'ceph', 'enabled': True}])  2026-03-11 01:07:12.777349 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:07:12.777357 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:07:12.777361 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:07:12.777365 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:07:12.777376 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:07:12.777380 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:07:12.777388 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:07:12.777393 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:07:12.777397 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:07:12.777400 | orchestrator | 2026-03-11 01:07:12.777404 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-11 01:07:12.777436 | orchestrator | Wednesday 11 March 2026 01:04:59 +0000 (0:00:04.235) 0:00:36.612 ******* 2026-03-11 01:07:12.777442 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:12.777446 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:12.777450 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:07:12.777454 | orchestrator | 2026-03-11 01:07:12.777457 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-11 01:07:12.777461 | orchestrator | Wednesday 11 March 2026 01:05:01 +0000 (0:00:01.867) 0:00:38.479 ******* 2026-03-11 01:07:12.777467 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-11 01:07:12.777472 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-11 01:07:12.777475 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-11 01:07:12.777479 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:07:12.777483 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:07:12.777486 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:07:12.777490 | orchestrator | 2026-03-11 01:07:12.777494 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-11 01:07:12.777498 | orchestrator | Wednesday 11 March 2026 01:05:04 +0000 (0:00:03.893) 0:00:42.373 ******* 2026-03-11 01:07:12.777501 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-11 01:07:12.777505 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-11 01:07:12.777509 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-11 01:07:12.777513 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-11 01:07:12.777517 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-11 01:07:12.777521 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-11 01:07:12.777524 | orchestrator | 2026-03-11 01:07:12.777528 | orchestrator | TASK [cinder : Check if policies shall be overwritten] 
************************* 2026-03-11 01:07:12.777532 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:01.112) 0:00:43.485 ******* 2026-03-11 01:07:12.777535 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.777539 | orchestrator | 2026-03-11 01:07:12.777543 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-11 01:07:12.777547 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:00.180) 0:00:43.665 ******* 2026-03-11 01:07:12.777550 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.777554 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:12.777560 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:12.777564 | orchestrator | 2026-03-11 01:07:12.777568 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:07:12.777572 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:00.408) 0:00:44.074 ******* 2026-03-11 01:07:12.777575 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:12.777579 | orchestrator | 2026-03-11 01:07:12.777583 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-11 01:07:12.777691 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.697) 0:00:44.772 ******* 2026-03-11 01:07:12.777696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.777704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.777711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.777715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777766 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.777773 | orchestrator | 2026-03-11 01:07:12.777777 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-11 01:07:12.777780 | orchestrator | Wednesday 11 March 2026 01:05:12 +0000 (0:00:04.804) 0:00:49.576 ******* 2026-03-11 01:07:12.777784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.777790 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777808 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:12.777813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.777822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.777843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777854 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:12.777858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777862 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.777866 | orchestrator | 2026-03-11 01:07:12.777869 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-11 01:07:12.777873 | orchestrator | Wednesday 11 March 2026 01:05:13 +0000 (0:00:00.955) 0:00:50.532 ******* 2026-03-11 01:07:12.777879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.777883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777900 | orchestrator | skipping: 
[testbed-node-0] 2026-03-11 01:07:12.777915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.777920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777939 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:12.777943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.777947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.777961 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:12.777965 | orchestrator | 2026-03-11 01:07:12.777969 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-11 01:07:12.777973 | orchestrator | Wednesday 11 March 2026 01:05:14 +0000 (0:00:01.663) 0:00:52.195 ******* 2026-03-11 01:07:12.777977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.777986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.777990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.777994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778067 | orchestrator | 2026-03-11 01:07:12.778071 | 
orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-11 01:07:12.778075 | orchestrator | Wednesday 11 March 2026 01:05:19 +0000 (0:00:04.321) 0:00:56.517 ******* 2026-03-11 01:07:12.778079 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-11 01:07:12.778085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-11 01:07:12.778089 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-11 01:07:12.778093 | orchestrator | 2026-03-11 01:07:12.778097 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-11 01:07:12.778100 | orchestrator | Wednesday 11 March 2026 01:05:21 +0000 (0:00:02.215) 0:00:58.733 ******* 2026-03-11 01:07:12.778104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.778108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.778114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.778121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 
01:07:12.778153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778168 | orchestrator | 2026-03-11 01:07:12.778172 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-11 01:07:12.778176 | orchestrator | Wednesday 11 March 2026 01:05:37 +0000 (0:00:15.840) 0:01:14.573 ******* 2026-03-11 01:07:12.778180 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:07:12.778184 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778187 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:07:12.778191 | orchestrator | 2026-03-11 01:07:12.778195 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-11 01:07:12.778199 | orchestrator | Wednesday 11 March 2026 01:05:38 +0000 (0:00:01.663) 0:01:16.236 ******* 2026-03-11 01:07:12.778203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.778211 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778225 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.778229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.778233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778249 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:12.778255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:07:12.778260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:07:12.778274 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:12.778278 | orchestrator | 2026-03-11 01:07:12.778282 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-11 01:07:12.778285 | orchestrator | Wednesday 11 March 2026 01:05:39 +0000 (0:00:00.569) 0:01:16.806 ******* 2026-03-11 01:07:12.778289 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.778293 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:12.778297 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:12.778300 | orchestrator | 2026-03-11 01:07:12.778304 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-11 01:07:12.778308 | orchestrator | Wednesday 11 March 2026 01:05:39 +0000 (0:00:00.351) 0:01:17.158 ******* 2026-03-11 01:07:12.778314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.778321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.778329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:07:12.778336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:07:12.778418 | orchestrator | 2026-03-11 01:07:12.778424 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:07:12.778429 | orchestrator | Wednesday 11 March 2026 01:05:43 +0000 (0:00:03.438) 0:01:20.596 ******* 2026-03-11 01:07:12.778433 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.778438 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:12.778442 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:12.778447 | orchestrator | 2026-03-11 01:07:12.778451 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-11 01:07:12.778456 | orchestrator | Wednesday 11 March 2026 01:05:43 +0000 (0:00:00.413) 0:01:21.010 ******* 2026-03-11 01:07:12.778459 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778463 | orchestrator | 2026-03-11 01:07:12.778467 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-11 01:07:12.778471 | orchestrator | Wednesday 11 March 2026 01:05:45 +0000 (0:00:01.973) 0:01:22.983 ******* 2026-03-11 01:07:12.778474 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778478 | orchestrator | 2026-03-11 01:07:12.778482 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-11 01:07:12.778489 | 
orchestrator | Wednesday 11 March 2026 01:05:47 +0000 (0:00:02.071) 0:01:25.055 ******* 2026-03-11 01:07:12.778493 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778496 | orchestrator | 2026-03-11 01:07:12.778500 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-11 01:07:12.778504 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:18.111) 0:01:43.166 ******* 2026-03-11 01:07:12.778508 | orchestrator | 2026-03-11 01:07:12.778512 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-11 01:07:12.778515 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.051) 0:01:43.217 ******* 2026-03-11 01:07:12.778519 | orchestrator | 2026-03-11 01:07:12.778523 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-11 01:07:12.778529 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.050) 0:01:43.268 ******* 2026-03-11 01:07:12.778533 | orchestrator | 2026-03-11 01:07:12.778537 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-11 01:07:12.778541 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.052) 0:01:43.320 ******* 2026-03-11 01:07:12.778545 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778548 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:07:12.778552 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:07:12.778556 | orchestrator | 2026-03-11 01:07:12.778560 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-11 01:07:12.778564 | orchestrator | Wednesday 11 March 2026 01:06:24 +0000 (0:00:18.213) 0:02:01.534 ******* 2026-03-11 01:07:12.778567 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778571 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:07:12.778575 | orchestrator | changed: 
[testbed-node-2] 2026-03-11 01:07:12.778579 | orchestrator | 2026-03-11 01:07:12.778582 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-11 01:07:12.778586 | orchestrator | Wednesday 11 March 2026 01:06:33 +0000 (0:00:09.726) 0:02:11.260 ******* 2026-03-11 01:07:12.778590 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778593 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:07:12.778597 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:07:12.778601 | orchestrator | 2026-03-11 01:07:12.778604 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-11 01:07:12.778608 | orchestrator | Wednesday 11 March 2026 01:06:58 +0000 (0:00:24.148) 0:02:35.409 ******* 2026-03-11 01:07:12.778612 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:12.778616 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:07:12.778619 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:07:12.778623 | orchestrator | 2026-03-11 01:07:12.778627 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-11 01:07:12.778630 | orchestrator | Wednesday 11 March 2026 01:07:10 +0000 (0:00:12.472) 0:02:47.881 ******* 2026-03-11 01:07:12.778634 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:12.778638 | orchestrator | 2026-03-11 01:07:12.778642 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:07:12.778646 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 01:07:12.778651 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:07:12.778654 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:07:12.778658 | orchestrator | 2026-03-11 
01:07:12.778662 | orchestrator | 2026-03-11 01:07:12.778666 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:07:12.778669 | orchestrator | Wednesday 11 March 2026 01:07:10 +0000 (0:00:00.254) 0:02:48.136 ******* 2026-03-11 01:07:12.778675 | orchestrator | =============================================================================== 2026-03-11 01:07:12.778679 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.15s 2026-03-11 01:07:12.778683 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.21s 2026-03-11 01:07:12.778687 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.11s 2026-03-11 01:07:12.778690 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.84s 2026-03-11 01:07:12.778694 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.47s 2026-03-11 01:07:12.778698 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.73s 2026-03-11 01:07:12.778701 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.47s 2026-03-11 01:07:12.778708 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.83s 2026-03-11 01:07:12.778712 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.80s 2026-03-11 01:07:12.778715 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.32s 2026-03-11 01:07:12.778719 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.24s 2026-03-11 01:07:12.778723 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.89s 2026-03-11 01:07:12.778726 | orchestrator | service-ks-register : cinder | Creating users 
--------------------------- 3.46s 2026-03-11 01:07:12.778730 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.44s 2026-03-11 01:07:12.778734 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.17s 2026-03-11 01:07:12.778737 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.90s 2026-03-11 01:07:12.778741 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.82s 2026-03-11 01:07:12.778747 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.38s 2026-03-11 01:07:12.778751 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.35s 2026-03-11 01:07:12.778755 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.22s 2026-03-11 01:07:15.820676 | orchestrator | 2026-03-11 01:07:15 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:15.822523 | orchestrator | 2026-03-11 01:07:15 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:15.823430 | orchestrator | 2026-03-11 01:07:15 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:15.823525 | orchestrator | 2026-03-11 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:18.872301 | orchestrator | 2026-03-11 01:07:18 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:18.874304 | orchestrator | 2026-03-11 01:07:18 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:18.876253 | orchestrator | 2026-03-11 01:07:18 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:18.876320 | orchestrator | 2026-03-11 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:21.915136 | orchestrator | 2026-03-11 
01:07:21 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:21.919281 | orchestrator | 2026-03-11 01:07:21 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:21.919345 | orchestrator | 2026-03-11 01:07:21 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:21.919739 | orchestrator | 2026-03-11 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:24.961098 | orchestrator | 2026-03-11 01:07:24 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:24.963173 | orchestrator | 2026-03-11 01:07:24 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:24.965340 | orchestrator | 2026-03-11 01:07:24 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:24.965396 | orchestrator | 2026-03-11 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:28.019553 | orchestrator | 2026-03-11 01:07:28 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:28.022613 | orchestrator | 2026-03-11 01:07:28 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:28.025435 | orchestrator | 2026-03-11 01:07:28 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:28.025495 | orchestrator | 2026-03-11 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:31.066460 | orchestrator | 2026-03-11 01:07:31 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:31.068744 | orchestrator | 2026-03-11 01:07:31 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:31.069776 | orchestrator | 2026-03-11 01:07:31 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:31.069825 | orchestrator | 2026-03-11 01:07:31 | INFO  | Wait 1 
second(s) until the next check 2026-03-11 01:07:34.113138 | orchestrator | 2026-03-11 01:07:34 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:34.113945 | orchestrator | 2026-03-11 01:07:34 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:34.115130 | orchestrator | 2026-03-11 01:07:34 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:34.115164 | orchestrator | 2026-03-11 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:37.169651 | orchestrator | 2026-03-11 01:07:37 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:37.171190 | orchestrator | 2026-03-11 01:07:37 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:37.174439 | orchestrator | 2026-03-11 01:07:37 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:37.174498 | orchestrator | 2026-03-11 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:40.219495 | orchestrator | 2026-03-11 01:07:40 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:40.221776 | orchestrator | 2026-03-11 01:07:40 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:40.224185 | orchestrator | 2026-03-11 01:07:40 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:40.224237 | orchestrator | 2026-03-11 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:43.267305 | orchestrator | 2026-03-11 01:07:43 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:43.268798 | orchestrator | 2026-03-11 01:07:43 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:43.270521 | orchestrator | 2026-03-11 01:07:43 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 
2026-03-11 01:07:43.270564 | orchestrator | 2026-03-11 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:46.313023 | orchestrator | 2026-03-11 01:07:46 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:46.317070 | orchestrator | 2026-03-11 01:07:46 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:46.319032 | orchestrator | 2026-03-11 01:07:46 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:46.319080 | orchestrator | 2026-03-11 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:49.354309 | orchestrator | 2026-03-11 01:07:49 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:49.355114 | orchestrator | 2026-03-11 01:07:49 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:49.356302 | orchestrator | 2026-03-11 01:07:49 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:49.356328 | orchestrator | 2026-03-11 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:52.396584 | orchestrator | 2026-03-11 01:07:52 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:52.398594 | orchestrator | 2026-03-11 01:07:52 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:52.401568 | orchestrator | 2026-03-11 01:07:52 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:52.401610 | orchestrator | 2026-03-11 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:55.436620 | orchestrator | 2026-03-11 01:07:55 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:55.438169 | orchestrator | 2026-03-11 01:07:55 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:55.439515 | orchestrator | 2026-03-11 
01:07:55 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:55.439552 | orchestrator | 2026-03-11 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:58.482580 | orchestrator | 2026-03-11 01:07:58 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:07:58.484070 | orchestrator | 2026-03-11 01:07:58 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:07:58.485457 | orchestrator | 2026-03-11 01:07:58 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:07:58.485487 | orchestrator | 2026-03-11 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:01.517223 | orchestrator | 2026-03-11 01:08:01 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:01.519221 | orchestrator | 2026-03-11 01:08:01 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:01.520690 | orchestrator | 2026-03-11 01:08:01 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:01.520729 | orchestrator | 2026-03-11 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:04.560227 | orchestrator | 2026-03-11 01:08:04 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:04.561287 | orchestrator | 2026-03-11 01:08:04 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:04.562182 | orchestrator | 2026-03-11 01:08:04 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:04.562216 | orchestrator | 2026-03-11 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:07.599525 | orchestrator | 2026-03-11 01:08:07 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:07.601262 | orchestrator | 2026-03-11 01:08:07 | INFO  | Task
a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:07.602600 | orchestrator | 2026-03-11 01:08:07 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:07.602665 | orchestrator | 2026-03-11 01:08:07 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:10.651291 | orchestrator | 2026-03-11 01:08:10 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:10.652418 | orchestrator | 2026-03-11 01:08:10 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:10.654651 | orchestrator | 2026-03-11 01:08:10 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:10.654713 | orchestrator | 2026-03-11 01:08:10 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:13.705333 | orchestrator | 2026-03-11 01:08:13 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:13.706429 | orchestrator | 2026-03-11 01:08:13 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:13.707832 | orchestrator | 2026-03-11 01:08:13 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:13.707870 | orchestrator | 2026-03-11 01:08:13 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:16.748338 | orchestrator | 2026-03-11 01:08:16 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:16.749705 | orchestrator | 2026-03-11 01:08:16 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:16.751730 | orchestrator | 2026-03-11 01:08:16 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:16.751777 | orchestrator | 2026-03-11 01:08:16 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:19.791781 | orchestrator | 2026-03-11 01:08:19 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state 
STARTED 2026-03-11 01:08:19.792304 | orchestrator | 2026-03-11 01:08:19 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:19.793091 | orchestrator | 2026-03-11 01:08:19 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:19.793126 | orchestrator | 2026-03-11 01:08:19 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:22.833706 | orchestrator | 2026-03-11 01:08:22 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:22.836612 | orchestrator | 2026-03-11 01:08:22 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:22.838395 | orchestrator | 2026-03-11 01:08:22 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:22.838467 | orchestrator | 2026-03-11 01:08:22 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:25.877308 | orchestrator | 2026-03-11 01:08:25 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:25.879591 | orchestrator | 2026-03-11 01:08:25 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:25.881724 | orchestrator | 2026-03-11 01:08:25 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:25.881785 | orchestrator | 2026-03-11 01:08:25 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:28.920366 | orchestrator | 2026-03-11 01:08:28 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:28.921517 | orchestrator | 2026-03-11 01:08:28 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:28.925135 | orchestrator | 2026-03-11 01:08:28 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:28.925180 | orchestrator | 2026-03-11 01:08:28 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:31.960624 | orchestrator | 
2026-03-11 01:08:31 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:31.962218 | orchestrator | 2026-03-11 01:08:31 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:31.963989 | orchestrator | 2026-03-11 01:08:31 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:31.964028 | orchestrator | 2026-03-11 01:08:31 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:35.004444 | orchestrator | 2026-03-11 01:08:35 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:35.005545 | orchestrator | 2026-03-11 01:08:35 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:35.007077 | orchestrator | 2026-03-11 01:08:35 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:35.007159 | orchestrator | 2026-03-11 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:38.064319 | orchestrator | 2026-03-11 01:08:38 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:38.067975 | orchestrator | 2026-03-11 01:08:38 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:38.070147 | orchestrator | 2026-03-11 01:08:38 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:38.070189 | orchestrator | 2026-03-11 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:41.121269 | orchestrator | 2026-03-11 01:08:41 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:41.122650 | orchestrator | 2026-03-11 01:08:41 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:41.123804 | orchestrator | 2026-03-11 01:08:41 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:41.124068 | orchestrator | 2026-03-11 01:08:41 | INFO  | 
Wait 1 second(s) until the next check 2026-03-11 01:08:44.165202 | orchestrator | 2026-03-11 01:08:44 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:44.165548 | orchestrator | 2026-03-11 01:08:44 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:44.166332 | orchestrator | 2026-03-11 01:08:44 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:44.166489 | orchestrator | 2026-03-11 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:47.228497 | orchestrator | 2026-03-11 01:08:47 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:47.230307 | orchestrator | 2026-03-11 01:08:47 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:47.232030 | orchestrator | 2026-03-11 01:08:47 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:47.232267 | orchestrator | 2026-03-11 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:50.288562 | orchestrator | 2026-03-11 01:08:50 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:50.289998 | orchestrator | 2026-03-11 01:08:50 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:50.291512 | orchestrator | 2026-03-11 01:08:50 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state STARTED 2026-03-11 01:08:50.291811 | orchestrator | 2026-03-11 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:53.335677 | orchestrator | 2026-03-11 01:08:53 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:53.336096 | orchestrator | 2026-03-11 01:08:53 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:53.337091 | orchestrator | 2026-03-11 01:08:53 | INFO  | Task 73dfe611-6db2-4997-90f8-1d18538925e8 is in state 
SUCCESS 2026-03-11 01:08:53.337128 | orchestrator | 2026-03-11 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:56.391095 | orchestrator | 2026-03-11 01:08:56 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:56.392624 | orchestrator | 2026-03-11 01:08:56 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:56.394300 | orchestrator | 2026-03-11 01:08:56 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:08:56.394342 | orchestrator | 2026-03-11 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:08:59.455180 | orchestrator | 2026-03-11 01:08:59 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:08:59.456932 | orchestrator | 2026-03-11 01:08:59 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:08:59.458587 | orchestrator | 2026-03-11 01:08:59 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:08:59.458630 | orchestrator | 2026-03-11 01:08:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:09:02.494314 | orchestrator | 2026-03-11 01:09:02 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:09:02.497171 | orchestrator | 2026-03-11 01:09:02 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:09:02.498434 | orchestrator | 2026-03-11 01:09:02 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:09:02.498748 | orchestrator | 2026-03-11 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:09:05.542121 | orchestrator | 2026-03-11 01:09:05 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:09:05.544033 | orchestrator | 2026-03-11 01:09:05 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:09:05.545257 | orchestrator | 
2026-03-11 01:09:05 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:09:05.545390 | orchestrator | 2026-03-11 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:09:08.584775 | orchestrator | 2026-03-11 01:09:08 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:09:08.585168 | orchestrator | 2026-03-11 01:09:08 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:09:08.587151 | orchestrator | 2026-03-11 01:09:08 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:09:08.587490 | orchestrator | 2026-03-11 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:09:11.635683 | orchestrator | 2026-03-11 01:09:11 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state STARTED 2026-03-11 01:09:11.636420 | orchestrator | 2026-03-11 01:09:11 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:09:11.641526 | orchestrator | 2026-03-11 01:09:11 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:09:11.641578 | orchestrator | 2026-03-11 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:09:14.687671 | orchestrator | 2026-03-11 01:09:14.687796 | orchestrator | 2026-03-11 01:09:14.687806 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:09:14.687814 | orchestrator | 2026-03-11 01:09:14.687821 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:09:14.687828 | orchestrator | Wednesday 11 March 2026 01:06:46 +0000 (0:00:00.144) 0:00:00.144 ******* 2026-03-11 01:09:14.687835 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:09:14.687842 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:09:14.687848 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:09:14.687855 | orchestrator | 2026-03-11 
01:09:14.687861 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:09:14.687867 | orchestrator | Wednesday 11 March 2026 01:06:46 +0000 (0:00:00.262) 0:00:00.406 ******* 2026-03-11 01:09:14.687890 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-03-11 01:09:14.687897 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-03-11 01:09:14.687904 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-03-11 01:09:14.687910 | orchestrator | 2026-03-11 01:09:14.687917 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-03-11 01:09:14.687924 | orchestrator | 2026-03-11 01:09:14.687930 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-03-11 01:09:14.687964 | orchestrator | Wednesday 11 March 2026 01:06:47 +0000 (0:00:00.636) 0:00:01.042 ******* 2026-03-11 01:09:14.687971 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:09:14.688037 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:09:14.688046 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:09:14.688086 | orchestrator | 2026-03-11 01:09:14.688102 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:09:14.688110 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:09:14.688118 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:09:14.688124 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:09:14.688130 | orchestrator | 2026-03-11 01:09:14.688170 | orchestrator | 2026-03-11 01:09:14.688179 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:09:14.688185 | orchestrator | Wednesday 11 March 2026 
01:08:52 +0000 (0:02:04.854) 0:02:05.897 ******* 2026-03-11 01:09:14.688192 | orchestrator | =============================================================================== 2026-03-11 01:09:14.688244 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 124.85s 2026-03-11 01:09:14.688251 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-03-11 01:09:14.688258 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-03-11 01:09:14.688264 | orchestrator | 2026-03-11 01:09:14.688271 | orchestrator | 2026-03-11 01:09:14.688278 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:09:14.688284 | orchestrator | 2026-03-11 01:09:14.688291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:09:14.688510 | orchestrator | Wednesday 11 March 2026 01:07:13 +0000 (0:00:00.292) 0:00:00.292 ******* 2026-03-11 01:09:14.688515 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:09:14.688519 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:09:14.688524 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:09:14.688528 | orchestrator | 2026-03-11 01:09:14.688532 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:09:14.688537 | orchestrator | Wednesday 11 March 2026 01:07:13 +0000 (0:00:00.335) 0:00:00.627 ******* 2026-03-11 01:09:14.688542 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-11 01:09:14.688546 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-11 01:09:14.688574 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-11 01:09:14.688579 | orchestrator | 2026-03-11 01:09:14.688583 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-11 
2026-03-11 01:09:14.688588 | orchestrator |
2026-03-11 01:09:14.688592 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-11 01:09:14.688597 | orchestrator | Wednesday 11 March 2026  01:07:14 +0000 (0:00:00.439)       0:00:01.067 *******
2026-03-11 01:09:14.688601 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:09:14.688606 | orchestrator |
2026-03-11 01:09:14.688611 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-11 01:09:14.688629 | orchestrator | Wednesday 11 March 2026  01:07:14 +0000 (0:00:00.547)       0:00:01.614 *******
2026-03-11 01:09:14.688644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.688866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.688878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.688883 | orchestrator |
2026-03-11 01:09:14.688887 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-03-11 01:09:14.688891 | orchestrator | Wednesday 11 March 2026  01:07:15 +0000 (0:00:00.749)       0:00:02.364 *******
2026-03-11 01:09:14.688895 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-03-11 01:09:14.688899 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-03-11 01:09:14.688903 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:09:14.688907 | orchestrator |
2026-03-11 01:09:14.688911 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-11 01:09:14.688915 | orchestrator | Wednesday 11 March 2026  01:07:16 +0000 (0:00:00.671)       0:00:03.101 *******
2026-03-11 01:09:14.688919 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:09:14.688923 | orchestrator |
2026-03-11 01:09:14.688926 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-11 01:09:14.688930 | orchestrator | Wednesday 11 March 2026  01:07:16 +0000 (0:00:00.671)       0:00:03.772 *******
2026-03-11 01:09:14.688934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.688938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.688947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.688951 | orchestrator |
2026-03-11 01:09:14.688968 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-11 01:09:14.688977 | orchestrator | Wednesday 11 March 2026  01:07:18 +0000 (0:00:01.387)       0:00:05.159 *******
2026-03-11 01:09:14.688987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.688997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689004 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:09:14.689010 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:09:14.689016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689022 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:09:14.689027 | orchestrator |
2026-03-11 01:09:14.689063 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-03-11 01:09:14.689075 | orchestrator | Wednesday 11 March 2026  01:07:18 +0000 (0:00:00.310)       0:00:05.470 *******
2026-03-11 01:09:14.689083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689089 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:09:14.689096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689103 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:09:14.689130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689135 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:09:14.689139 | orchestrator |
2026-03-11 01:09:14.689143 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-11 01:09:14.689146 | orchestrator | Wednesday 11 March 2026  01:07:19 +0000 (0:00:00.644)       0:00:06.114 *******
2026-03-11 01:09:14.689174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689192 | orchestrator |
2026-03-11 01:09:14.689196 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-11 01:09:14.689200 | orchestrator | Wednesday 11 March 2026  01:07:20 +0000 (0:00:01.144)       0:00:07.259 *******
2026-03-11 01:09:14.689203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-11 01:09:14.689229 | orchestrator |
2026-03-11 01:09:14.689232 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-11 01:09:14.689236 | orchestrator | Wednesday 11 March 2026  01:07:21 +0000 (0:00:01.264)       0:00:08.523 *******
2026-03-11 01:09:14.689240 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:09:14.689246 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:09:14.689250 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:09:14.689254 | orchestrator |
2026-03-11 01:09:14.689258 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-11 01:09:14.689269 | orchestrator | Wednesday 11 March 2026  01:07:21 +0000 (0:00:00.400)       0:00:08.923 *******
2026-03-11 01:09:14.689273 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-11 01:09:14.689281 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-11 01:09:14.689288 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-11 01:09:14.689292 | orchestrator |
2026-03-11 01:09:14.689296 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-11 01:09:14.689300 | orchestrator | Wednesday 11 March 2026  01:07:23 +0000 (0:00:01.198)       0:00:10.121 *******
2026-03-11 01:09:14.689304 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-11 01:09:14.689308 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-11 01:09:14.689312 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-11 01:09:14.689316 | orchestrator |
2026-03-11 01:09:14.689319 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-11 01:09:14.689323 | orchestrator | Wednesday 11 March 2026  01:07:24 +0000 (0:00:01.201)       0:00:11.322 *******
2026-03-11 01:09:14.689327 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:09:14.689331 | orchestrator |
2026-03-11 01:09:14.689335 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-11 01:09:14.689339 | orchestrator | Wednesday 11 March 2026  01:07:25 +0000 (0:00:00.709)       0:00:12.032 *******
2026-03-11 01:09:14.689342 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-11 01:09:14.689346 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-11 01:09:14.689350 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:09:14.689354 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:09:14.689358 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:09:14.689363 | orchestrator |
2026-03-11 01:09:14.689369 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-11 01:09:14.689376 | orchestrator | Wednesday 11 March 2026  01:07:25 +0000 (0:00:00.501)       0:00:12.711 *******
2026-03-11 01:09:14.689385 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:09:14.689391 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:09:14.689397 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:09:14.689403 | orchestrator |
2026-03-11 01:09:14.689409 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-11 01:09:14.689414 | orchestrator | Wednesday 11 March 2026  01:07:26 +0000 (0:00:00.501)       0:00:13.213 *******
2026-03-11 01:09:14.689422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088886, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.785235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088886, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.785235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088886, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.785235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088910, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7930758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088910, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7930758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088910, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7930758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088890, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7878015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088890, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7878015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088890, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7878015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088913, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7955952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088913, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7955952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088913, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7955952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088897, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7897115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088897, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7897115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088897, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7897115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088907, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7920175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088907, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7920175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088907, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7920175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088885, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7847908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088885, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7847908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088885, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7847908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088888, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7862349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088888, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7862349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088888, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7862349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088891, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.78813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:09:14.689638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088891, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.78813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False,
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088891, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.78813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088900, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.790668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088900, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.790668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088900, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.790668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088909, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.792874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088909, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.792874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088909, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.792874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088889, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7872634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088889, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7872634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088889, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7872634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088905, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7913914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088905, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7913914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088905, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7913914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088899, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7902226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088899, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7902226, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088899, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7902226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088895, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7891872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088895, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773188334.7891872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088895, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7891872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088894, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7887375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088894, 'dev': 111, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7887375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088894, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7887375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088903, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7913914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088903, 'dev': 
111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7913914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088903, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7913914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088892, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7882876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 
'inode': 1088892, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7882876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088892, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7882876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088908, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7925243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088908, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7925243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088908, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7925243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088947, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8202572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088947, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8202572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088947, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8202572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088926, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8032353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088926, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8032353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088926, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8032353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088921, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7976167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088921, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7976167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088921, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7976167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088930, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8064632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689915 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088930, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8064632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088930, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8064632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088917, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7960107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-11 01:09:14.689932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088917, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7960107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088917, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7960107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088940, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8145657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088940, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8145657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088940, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8145657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088931, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773188334.8112354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088931, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8112354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088931, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8112354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 22317, 'inode': 1088941, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.815075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088941, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.815075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088941, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.815075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.689998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088945, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8192353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088945, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8192353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088945, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8192353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088939, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088939, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088939, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690058 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088928, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8056912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088928, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8056912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088928, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8056912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690086 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088925, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8012352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088925, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8012352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088925, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8012352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-11 01:09:14.690106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088927, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8042352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088927, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8042352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088927, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8042352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088923, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7982352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088923, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7982352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088923, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7982352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088929, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.805959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088929, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.805959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088929, 'dev': 111, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.805959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088944, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8182354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088944, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8182354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 222049, 'inode': 1088944, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8182354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088943, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8172355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088943, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8172355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088943, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8172355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088919, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7961686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088919, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7961686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690205 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088919, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7961686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088920, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7965481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088920, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7965481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-11 01:09:14.690222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088920, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.7965481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088935, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088935, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088935, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088942, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8152354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088942, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773188334.8152354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088942, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773188334.8152354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:09:14.690261 | orchestrator | 2026-03-11 01:09:14.690266 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-11 01:09:14.690271 | orchestrator | Wednesday 11 March 2026 01:07:59 +0000 (0:00:33.765) 0:00:46.979 ******* 2026-03-11 01:09:14.690277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:09:14.690282 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:09:14.690289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:09:14.690296 | orchestrator | 2026-03-11 01:09:14.690300 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-11 01:09:14.690305 | orchestrator | Wednesday 11 March 2026 01:08:00 +0000 (0:00:00.952) 0:00:47.931 ******* 2026-03-11 01:09:14.690309 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:09:14.690314 | orchestrator | 2026-03-11 01:09:14.690318 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-11 01:09:14.690323 | orchestrator | Wednesday 11 March 2026 01:08:02 +0000 (0:00:01.999) 0:00:49.930 
******* 2026-03-11 01:09:14.690327 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:09:14.690331 | orchestrator | 2026-03-11 01:09:14.690336 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-11 01:09:14.690340 | orchestrator | Wednesday 11 March 2026 01:08:04 +0000 (0:00:01.897) 0:00:51.828 ******* 2026-03-11 01:09:14.690345 | orchestrator | 2026-03-11 01:09:14.690349 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-11 01:09:14.690353 | orchestrator | Wednesday 11 March 2026 01:08:04 +0000 (0:00:00.068) 0:00:51.896 ******* 2026-03-11 01:09:14.690358 | orchestrator | 2026-03-11 01:09:14.690362 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-11 01:09:14.690366 | orchestrator | Wednesday 11 March 2026 01:08:05 +0000 (0:00:00.308) 0:00:52.205 ******* 2026-03-11 01:09:14.690371 | orchestrator | 2026-03-11 01:09:14.690375 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-11 01:09:14.690379 | orchestrator | Wednesday 11 March 2026 01:08:05 +0000 (0:00:00.073) 0:00:52.279 ******* 2026-03-11 01:09:14.690384 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:09:14.690388 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:09:14.690393 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:09:14.690397 | orchestrator | 2026-03-11 01:09:14.690402 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-11 01:09:14.690406 | orchestrator | Wednesday 11 March 2026 01:08:06 +0000 (0:00:01.689) 0:00:53.969 ******* 2026-03-11 01:09:14.690411 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:09:14.690415 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:09:14.690419 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries 
left). 2026-03-11 01:09:14.690424 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-11 01:09:14.690429 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-03-11 01:09:14.690433 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:09:14.690438 | orchestrator | 2026-03-11 01:09:14.690442 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-11 01:09:14.690447 | orchestrator | Wednesday 11 March 2026 01:08:46 +0000 (0:00:39.138) 0:01:33.107 ******* 2026-03-11 01:09:14.690451 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:09:14.690456 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:09:14.690460 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:09:14.690464 | orchestrator | 2026-03-11 01:09:14.690469 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-11 01:09:14.690474 | orchestrator | Wednesday 11 March 2026 01:09:07 +0000 (0:00:21.203) 0:01:54.311 ******* 2026-03-11 01:09:14.690478 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:09:14.690482 | orchestrator | 2026-03-11 01:09:14.690487 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-11 01:09:14.690494 | orchestrator | Wednesday 11 March 2026 01:09:09 +0000 (0:00:02.309) 0:01:56.620 ******* 2026-03-11 01:09:14.690501 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:09:14.690505 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:09:14.690509 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:09:14.690514 | orchestrator | 2026-03-11 01:09:14.690518 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-11 01:09:14.690523 | orchestrator | Wednesday 11 March 2026 01:09:10 +0000 (0:00:00.689) 0:01:57.309 ******* 
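The "FAILED - RETRYING: … Waiting for grafana to start on first node (12 retries left)" records above come from Ansible's `retries`/`until` mechanism on the wait handler. A minimal Python sketch of that generic retry loop (the `check` callable and the printed countdown are illustrative stand-ins, not the actual Ansible implementation):

```python
import time


def wait_until(check, retries=12, delay=0.0):
    """Generic retry loop mirroring Ansible's retries/until:
    call check() until it returns True or retries run out."""
    for attempt in range(retries, 0, -1):
        if check():
            return True
        # Same shape as the log records above: count down remaining retries.
        print(f"FAILED - RETRYING: ({attempt - 1} retries left).")
        time.sleep(delay)
    return False
```

In the log, grafana answered on the fourth attempt (after three FAILED - RETRYING records), which is why the handler took ~39 s rather than exhausting all 12 retries.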
2026-03-11 01:09:14.690528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-11 01:09:14.690533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-11 01:09:14.690538 | orchestrator | 2026-03-11 01:09:14.690542 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-11 01:09:14.690547 | orchestrator | Wednesday 11 March 2026 01:09:12 +0000 (0:00:02.485) 0:01:59.795 ******* 2026-03-11 01:09:14.690553 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:09:14.690558 | orchestrator | 2026-03-11 01:09:14.690562 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:09:14.690567 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:09:14.690572 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:09:14.690576 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:09:14.690581 | orchestrator | 2026-03-11 01:09:14.690585 | orchestrator | 2026-03-11 01:09:14.690590 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:09:14.690594 | orchestrator | Wednesday 11 March 2026 01:09:13 +0000 (0:00:00.270) 0:02:00.065 ******* 2026-03-11 
01:09:14.690598 | orchestrator | =============================================================================== 2026-03-11 01:09:14.690603 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.14s 2026-03-11 01:09:14.690607 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.77s 2026-03-11 01:09:14.690612 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 21.20s 2026-03-11 01:09:14.690616 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.49s 2026-03-11 01:09:14.690621 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.31s 2026-03-11 01:09:14.690625 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.00s 2026-03-11 01:09:14.690630 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 1.90s 2026-03-11 01:09:14.690634 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.69s 2026-03-11 01:09:14.690638 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s 2026-03-11 01:09:14.690643 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.26s 2026-03-11 01:09:14.690647 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.20s 2026-03-11 01:09:14.690651 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s 2026-03-11 01:09:14.690659 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.14s 2026-03-11 01:09:14.690664 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.95s 2026-03-11 01:09:14.690668 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.75s 2026-03-11 01:09:14.690673 
| orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.74s 2026-03-11 01:09:14.690677 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.71s 2026-03-11 01:09:14.690681 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 0.69s 2026-03-11 01:09:14.690686 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.68s 2026-03-11 01:09:14.690690 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.67s 2026-03-11 01:09:14.690695 | orchestrator | 2026-03-11 01:09:14 | INFO  | Task b815a853-ff78-46d6-891f-d0bd51282a6f is in state SUCCESS 2026-03-11 01:09:14.690699 | orchestrator | 2026-03-11 01:09:14 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:09:14.690704 | orchestrator | 2026-03-11 01:09:14 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:09:14.690745 | orchestrator | 2026-03-11 01:09:14 | INFO  | Wait 1 second(s) until the next check
| Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:32.395482 | orchestrator | 2026-03-11 01:12:32 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:35.446756 | orchestrator | 2026-03-11 01:12:35 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:12:35.449810 | orchestrator | 2026-03-11 01:12:35 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:35.449865 | orchestrator | 2026-03-11 01:12:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:38.491995 | orchestrator | 2026-03-11 01:12:38 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:12:38.493040 | orchestrator | 2026-03-11 01:12:38 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:38.493074 | orchestrator | 2026-03-11 01:12:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:41.535664 | orchestrator | 2026-03-11 01:12:41 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:12:41.536643 | orchestrator | 2026-03-11 01:12:41 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:41.536681 | orchestrator | 2026-03-11 01:12:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:44.580845 | orchestrator | 2026-03-11 01:12:44 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:12:44.582908 | orchestrator | 2026-03-11 01:12:44 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:44.582960 | orchestrator | 2026-03-11 01:12:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:47.624652 | orchestrator | 2026-03-11 01:12:47 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:12:47.626158 | orchestrator | 2026-03-11 01:12:47 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 
01:12:47.627079 | orchestrator | 2026-03-11 01:12:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:50.669738 | orchestrator | 2026-03-11 01:12:50 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state STARTED 2026-03-11 01:12:50.670249 | orchestrator | 2026-03-11 01:12:50 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:50.670470 | orchestrator | 2026-03-11 01:12:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:53.721832 | orchestrator | 2026-03-11 01:12:53 | INFO  | Task a5f3dfdf-ee46-4d2f-a536-5c39fb77cb75 is in state SUCCESS 2026-03-11 01:12:53.723068 | orchestrator | 2026-03-11 01:12:53.723126 | orchestrator | 2026-03-11 01:12:53.723135 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:12:53.723142 | orchestrator | 2026-03-11 01:12:53.723148 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-11 01:12:53.723155 | orchestrator | Wednesday 11 March 2026 01:04:49 +0000 (0:00:00.856) 0:00:00.856 ******* 2026-03-11 01:12:53.723161 | orchestrator | changed: [testbed-manager] 2026-03-11 01:12:53.723168 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:53.723174 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:53.723180 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:53.723186 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.723192 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.723198 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.723204 | orchestrator | 2026-03-11 01:12:53.723210 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:12:53.723216 | orchestrator | Wednesday 11 March 2026 01:04:50 +0000 (0:00:01.176) 0:00:02.032 ******* 2026-03-11 01:12:53.723222 | orchestrator | changed: [testbed-manager] 2026-03-11 01:12:53.723244 | 
orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.723251 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:12:53.723256 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:12:53.723263 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:53.723268 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:53.723274 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:53.723280 | orchestrator |
2026-03-11 01:12:53.723286 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:12:53.723292 | orchestrator | Wednesday 11 March 2026 01:04:51 +0000 (0:00:00.756) 0:00:02.789 *******
2026-03-11 01:12:53.723311 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-11 01:12:53.723317 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-11 01:12:53.723323 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-11 01:12:53.723329 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-11 01:12:53.723335 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-11 01:12:53.723341 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-11 01:12:53.723347 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-11 01:12:53.723353 | orchestrator |
2026-03-11 01:12:53.723359 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-11 01:12:53.723365 | orchestrator |
2026-03-11 01:12:53.723371 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-11 01:12:53.723377 | orchestrator | Wednesday 11 March 2026 01:04:52 +0000 (0:00:01.141) 0:00:03.930 *******
2026-03-11 01:12:53.723392 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:53.723399 | orchestrator |
2026-03-11 01:12:53.723405 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-11 01:12:53.723410 | orchestrator | Wednesday 11 March 2026 01:04:53 +0000 (0:00:01.235) 0:00:05.165 *******
2026-03-11 01:12:53.723416 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-11 01:12:53.723451 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-11 01:12:53.723457 | orchestrator |
2026-03-11 01:12:53.723463 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-11 01:12:53.723469 | orchestrator | Wednesday 11 March 2026 01:04:57 +0000 (0:00:04.233) 0:00:09.399 *******
2026-03-11 01:12:53.723489 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 01:12:53.723496 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 01:12:53.723503 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.723509 | orchestrator |
2026-03-11 01:12:53.723515 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-11 01:12:53.723553 | orchestrator | Wednesday 11 March 2026 01:05:01 +0000 (0:00:04.248) 0:00:13.648 *******
2026-03-11 01:12:53.723560 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.723566 | orchestrator |
2026-03-11 01:12:53.723572 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-11 01:12:53.723578 | orchestrator | Wednesday 11 March 2026 01:05:03 +0000 (0:00:01.415) 0:00:15.064 *******
2026-03-11 01:12:53.723584 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.723605 | orchestrator |
2026-03-11 01:12:53.723617 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-11 01:12:53.723623 | orchestrator | Wednesday 11 March 2026 01:05:04 +0000 (0:00:01.659) 0:00:16.724 *******
2026-03-11 01:12:53.723628 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.723633 | orchestrator |
2026-03-11 01:12:53.723643 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:53.723648 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:02.480) 0:00:19.204 *******
2026-03-11 01:12:53.723653 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.723658 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.723664 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.723678 | orchestrator |
2026-03-11 01:12:53.723683 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-11 01:12:53.723689 | orchestrator | Wednesday 11 March 2026 01:05:08 +0000 (0:00:00.561) 0:00:19.766 *******
2026-03-11 01:12:53.723694 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:53.723700 | orchestrator |
2026-03-11 01:12:53.723705 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-11 01:12:53.723711 | orchestrator | Wednesday 11 March 2026 01:05:39 +0000 (0:00:31.728) 0:00:51.495 *******
2026-03-11 01:12:53.723716 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.723721 | orchestrator |
2026-03-11 01:12:53.723725 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-11 01:12:53.723731 | orchestrator | Wednesday 11 March 2026 01:05:54 +0000 (0:00:14.508) 0:01:06.003 *******
2026-03-11 01:12:53.723738 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:53.723748 | orchestrator |
2026-03-11 01:12:53.723758 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-11 01:12:53.723768 | orchestrator | Wednesday 11 March 2026 01:06:07 +0000 (0:00:12.782) 0:01:18.785 *******
2026-03-11 01:12:53.723791 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:53.723801 | orchestrator |
2026-03-11 01:12:53.723809 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-11 01:12:53.723818 | orchestrator | Wednesday 11 March 2026 01:06:08 +0000 (0:00:01.785) 0:01:20.570 *******
2026-03-11 01:12:53.723828 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.723837 | orchestrator |
2026-03-11 01:12:53.723843 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:53.723848 | orchestrator | Wednesday 11 March 2026 01:06:09 +0000 (0:00:00.489) 0:01:21.060 *******
2026-03-11 01:12:53.723854 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:53.723860 | orchestrator |
2026-03-11 01:12:53.723870 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-11 01:12:53.723878 | orchestrator | Wednesday 11 March 2026 01:06:09 +0000 (0:00:00.573) 0:01:21.633 *******
2026-03-11 01:12:53.723884 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:53.723894 | orchestrator |
2026-03-11 01:12:53.723904 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-11 01:12:53.723914 | orchestrator | Wednesday 11 March 2026 01:06:28 +0000 (0:00:18.220) 0:01:39.853 *******
2026-03-11 01:12:53.723921 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.723931 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.723941 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.723951 | orchestrator |
2026-03-11 01:12:53.723959 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-11 01:12:53.723969 | orchestrator |
2026-03-11 01:12:53.723977 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-11 01:12:53.723987 | orchestrator | Wednesday 11 March 2026 01:06:28 +0000 (0:00:00.392) 0:01:40.246 *******
2026-03-11 01:12:53.723996 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:53.724006 | orchestrator |
2026-03-11 01:12:53.724014 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-11 01:12:53.724024 | orchestrator | Wednesday 11 March 2026 01:06:29 +0000 (0:00:00.599) 0:01:40.845 *******
2026-03-11 01:12:53.724034 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724041 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724051 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.724057 | orchestrator |
2026-03-11 01:12:53.724066 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-11 01:12:53.724076 | orchestrator | Wednesday 11 March 2026 01:06:30 +0000 (0:00:01.798) 0:01:42.644 *******
2026-03-11 01:12:53.724084 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724094 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724110 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.724116 | orchestrator |
2026-03-11 01:12:53.724121 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-11 01:12:53.724127 | orchestrator | Wednesday 11 March 2026 01:06:33 +0000 (0:00:02.181) 0:01:44.825 *******
2026-03-11 01:12:53.724133 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.724139 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724144 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724150 | orchestrator |
2026-03-11 01:12:53.724154 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-11 01:12:53.724158 | orchestrator | Wednesday 11 March 2026 01:06:33 +0000 (0:00:00.350) 0:01:45.176 *******
2026-03-11 01:12:53.724163 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-11 01:12:53.724167 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724172 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-11 01:12:53.724176 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724183 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-11 01:12:53.724189 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-11 01:12:53.724446 | orchestrator |
2026-03-11 01:12:53.724458 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-11 01:12:53.724464 | orchestrator | Wednesday 11 March 2026 01:06:40 +0000 (0:00:06.786) 0:01:51.962 *******
2026-03-11 01:12:53.724470 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.724476 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724482 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724487 | orchestrator |
2026-03-11 01:12:53.724493 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-11 01:12:53.724500 | orchestrator | Wednesday 11 March 2026 01:06:40 +0000 (0:00:00.397) 0:01:52.359 *******
2026-03-11 01:12:53.724506 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-11 01:12:53.724513 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.724563 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-11 01:12:53.724574 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724581 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-11 01:12:53.724588 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724594 | orchestrator |
2026-03-11 01:12:53.724601 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-11 01:12:53.724607 | orchestrator | Wednesday 11 March 2026 01:06:41 +0000 (0:00:00.567) 0:01:52.926 *******
2026-03-11 01:12:53.724613 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724619 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724625 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.724630 | orchestrator |
2026-03-11 01:12:53.724636 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-11 01:12:53.724643 | orchestrator | Wednesday 11 March 2026 01:06:41 +0000 (0:00:00.781) 0:01:53.708 *******
2026-03-11 01:12:53.724649 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724655 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724661 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.724667 | orchestrator |
2026-03-11 01:12:53.724673 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-11 01:12:53.724679 | orchestrator | Wednesday 11 March 2026 01:06:42 +0000 (0:00:00.858) 0:01:54.566 *******
2026-03-11 01:12:53.724685 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724691 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724742 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.724751 | orchestrator |
2026-03-11 01:12:53.724757 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-11 01:12:53.724763 | orchestrator | Wednesday 11 March 2026 01:06:44 +0000 (0:00:02.027) 0:01:56.593 *******
2026-03-11 01:12:53.724769 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724775 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724790 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:53.724797 | orchestrator |
2026-03-11 01:12:53.724803 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-11 01:12:53.724875 | orchestrator | Wednesday 11 March 2026 01:07:06 +0000 (0:00:21.229) 0:02:17.823 *******
2026-03-11 01:12:53.724890 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724896 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724902 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:53.724909 | orchestrator |
2026-03-11 01:12:53.724915 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-11 01:12:53.724922 | orchestrator | Wednesday 11 March 2026 01:07:20 +0000 (0:00:14.274) 0:02:32.098 *******
2026-03-11 01:12:53.724927 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:53.724933 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724940 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724946 | orchestrator |
2026-03-11 01:12:53.724952 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-11 01:12:53.724959 | orchestrator | Wednesday 11 March 2026 01:07:21 +0000 (0:00:00.783) 0:02:32.881 *******
2026-03-11 01:12:53.724965 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.724970 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.724977 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:53.724983 | orchestrator |
2026-03-11 01:12:53.724988 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-11 01:12:53.724995 | orchestrator | Wednesday 11 March 2026 01:07:34 +0000 (0:00:12.985) 0:02:45.867 *******
2026-03-11 01:12:53.725001 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.725006 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.725012 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.725019 | orchestrator |
2026-03-11 01:12:53.725025 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-11 01:12:53.725031 | orchestrator | Wednesday 11 March 2026 01:07:35 +0000 (0:00:01.062) 0:02:46.930 *******
2026-03-11 01:12:53.725038 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.725044 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.725050 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.725056 | orchestrator |
2026-03-11 01:12:53.725063 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-11 01:12:53.725068 | orchestrator |
2026-03-11 01:12:53.725077 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:53.725084 | orchestrator | Wednesday 11 March 2026 01:07:35 +0000 (0:00:00.523) 0:02:47.453 *******
2026-03-11 01:12:53.725090 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:53.725097 | orchestrator |
2026-03-11 01:12:53.725102 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-11 01:12:53.725108 | orchestrator | Wednesday 11 March 2026 01:07:36 +0000 (0:00:00.554) 0:02:48.008 *******
2026-03-11 01:12:53.725114 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-11 01:12:53.725120 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-11 01:12:53.725126 | orchestrator |
2026-03-11 01:12:53.725132 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-11 01:12:53.725139 | orchestrator | Wednesday 11 March 2026 01:07:40 +0000 (0:00:04.073) 0:02:52.082 *******
2026-03-11 01:12:53.725145 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-11 01:12:53.725152 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-11 01:12:53.725158 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-11 01:12:53.725164 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-11 01:12:53.725176 | orchestrator |
2026-03-11 01:12:53.725182 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-11 01:12:53.725188 | orchestrator | Wednesday 11 March 2026 01:07:46 +0000 (0:00:06.218) 0:02:58.300 *******
2026-03-11 01:12:53.725194 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-11 01:12:53.725200 | orchestrator |
2026-03-11 01:12:53.725206 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-11 01:12:53.725213 | orchestrator | Wednesday 11 March 2026 01:07:49 +0000 (0:00:03.073) 0:03:01.374 *******
2026-03-11 01:12:53.725219 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-11 01:12:53.725225 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-11 01:12:53.725232 | orchestrator |
2026-03-11 01:12:53.725238 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-11 01:12:53.725245 | orchestrator | Wednesday 11 March 2026 01:07:53 +0000 (0:00:03.807) 0:03:05.181 *******
2026-03-11 01:12:53.725251 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-11 01:12:53.725258 | orchestrator |
2026-03-11 01:12:53.725264 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-11 01:12:53.725271 | orchestrator | Wednesday 11 March 2026 01:07:56 +0000 (0:00:03.101) 0:03:08.283 *******
2026-03-11 01:12:53.725277 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-11 01:12:53.725283 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-11 01:12:53.725289 | orchestrator |
2026-03-11 01:12:53.725308 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-11 01:12:53.725321 | orchestrator | Wednesday 11 March 2026 01:08:03 +0000 (0:00:06.758) 0:03:15.042 *******
2026-03-11 01:12:53.725330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:53.725343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:53.725355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:53.725368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.725375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.725381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.725387 | orchestrator |
2026-03-11 01:12:53.725393 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-11 01:12:53.725399 | orchestrator | Wednesday 11 March 2026 01:08:04 +0000 (0:00:01.171) 0:03:16.214 *******
2026-03-11 01:12:53.725405 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.725411 | orchestrator |
2026-03-11 01:12:53.725417 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-11 01:12:53.725426 | orchestrator | Wednesday 11 March 2026 01:08:04 +0000 (0:00:00.142) 0:03:16.357 *******
2026-03-11 01:12:53.725432 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.725443 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.725449 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.725455 | orchestrator |
2026-03-11 01:12:53.725462 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-11 01:12:53.725468 | orchestrator | Wednesday 11 March 2026 01:08:04 +0000 (0:00:00.298) 0:03:16.656 *******
2026-03-11 01:12:53.725474 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:12:53.725481 | orchestrator |
2026-03-11 01:12:53.725487 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-11 01:12:53.725493 | orchestrator | Wednesday 11 March 2026 01:08:05 +0000 (0:00:00.918) 0:03:17.574 *******
2026-03-11 01:12:53.725499 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.725505 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.725511 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.725516 | orchestrator |
2026-03-11 01:12:53.725819 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:53.725827 | orchestrator | Wednesday 11 March 2026 01:08:06 +0000 (0:00:00.294) 0:03:17.869 *******
2026-03-11 01:12:53.725834 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:53.725840 | orchestrator |
2026-03-11 01:12:53.725845 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-11 01:12:53.725850 | orchestrator | Wednesday 11 March 2026 01:08:06 +0000 (0:00:00.544) 0:03:18.413 *******
2026-03-11 01:12:53.725865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:53.725873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.725892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.725900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.725907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.725919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.725926 | orchestrator | 2026-03-11 01:12:53.725932 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-11 01:12:53.725938 | orchestrator | Wednesday 11 March 2026 01:08:09 +0000 (0:00:02.977) 0:03:21.390 ******* 2026-03-11 01:12:53.725945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:53.725959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.725966 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.725973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:53.725984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.725990 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.725997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:53.726011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.726060 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.726066 | orchestrator | 2026-03-11 01:12:53.726072 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-11 01:12:53.726078 | orchestrator | Wednesday 11 March 2026 01:08:10 +0000 (0:00:00.575) 0:03:21.966 ******* 2026-03-11 01:12:53.726084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 
01:12:53.726092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.726098 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.726115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 
01:12:53.726129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.726142 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.726149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 
01:12:53.726156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.726163 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.726169 | orchestrator | 2026-03-11 01:12:53.726175 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-11 01:12:53.726182 | orchestrator | Wednesday 11 March 2026 01:08:11 +0000 (0:00:00.774) 0:03:22.741 ******* 2026-03-11 01:12:53.726221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726292 | orchestrator | 2026-03-11 01:12:53.726318 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-11 01:12:53.726324 | orchestrator | Wednesday 11 March 2026 01:08:13 +0000 (0:00:02.508) 0:03:25.249 ******* 2026-03-11 01:12:53.726332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726384 | orchestrator | 2026-03-11 01:12:53.726390 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-11 01:12:53.726396 | orchestrator | Wednesday 11 March 2026 01:08:18 +0000 (0:00:05.312) 0:03:30.562 ******* 2026-03-11 01:12:53.726408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:53.726418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.726424 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.726433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:53.726440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.726446 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.726452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:53.726467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.726474 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.726480 | orchestrator | 2026-03-11 01:12:53.726486 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-11 01:12:53.726601 | orchestrator | Wednesday 11 March 2026 01:08:19 +0000 (0:00:00.572) 0:03:31.134 ******* 2026-03-11 01:12:53.726611 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:53.726619 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:53.726625 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:53.726631 | orchestrator | 2026-03-11 01:12:53.726638 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-11 01:12:53.726644 | orchestrator | Wednesday 11 March 2026 01:08:20 +0000 (0:00:01.424) 0:03:32.559 ******* 2026-03-11 01:12:53.726650 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.726655 | orchestrator | skipping: [testbed-node-1] 2026-03-11 
01:12:53.726661 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.726668 | orchestrator | 2026-03-11 01:12:53.726674 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-11 01:12:53.726680 | orchestrator | Wednesday 11 March 2026 01:08:21 +0000 (0:00:00.309) 0:03:32.868 ******* 2026-03-11 01:12:53.726691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:53.726726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.726747 | orchestrator | 2026-03-11 01:12:53.726753 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-11 01:12:53.726760 | orchestrator | Wednesday 11 March 2026 01:08:23 +0000 (0:00:02.242) 0:03:35.111 ******* 2026-03-11 01:12:53.726765 | orchestrator | 2026-03-11 01:12:53.726771 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-11 01:12:53.726777 | orchestrator | Wednesday 11 March 2026 01:08:23 +0000 (0:00:00.134) 0:03:35.246 ******* 2026-03-11 01:12:53.726787 | orchestrator | 2026-03-11 01:12:53.726793 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-11 01:12:53.726799 | orchestrator | Wednesday 11 March 2026 01:08:23 +0000 (0:00:00.124) 0:03:35.371 ******* 2026-03-11 01:12:53.726805 | orchestrator | 2026-03-11 01:12:53.726811 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-11 01:12:53.726817 | orchestrator | Wednesday 11 March 2026 01:08:23 +0000 (0:00:00.129) 0:03:35.500 ******* 2026-03-11 01:12:53.726824 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:53.726830 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:53.726836 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:53.726841 | orchestrator | 2026-03-11 01:12:53.726846 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-11 01:12:53.726851 | orchestrator | Wednesday 11 March 2026 01:08:43 +0000 (0:00:19.999) 0:03:55.500 ******* 2026-03-11 01:12:53.726856 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:53.726862 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:53.726868 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:53.726874 | 
orchestrator | 2026-03-11 01:12:53.726880 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-11 01:12:53.726886 | orchestrator | 2026-03-11 01:12:53.726895 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:53.726906 | orchestrator | Wednesday 11 March 2026 01:08:50 +0000 (0:00:06.545) 0:04:02.045 ******* 2026-03-11 01:12:53.726918 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:12:53.726925 | orchestrator | 2026-03-11 01:12:53.726935 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:53.726941 | orchestrator | Wednesday 11 March 2026 01:08:51 +0000 (0:00:01.627) 0:04:03.672 ******* 2026-03-11 01:12:53.726947 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.726953 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.726958 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.726963 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.726969 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.726975 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.726981 | orchestrator | 2026-03-11 01:12:53.726988 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-11 01:12:53.726993 | orchestrator | Wednesday 11 March 2026 01:08:52 +0000 (0:00:00.656) 0:04:04.328 ******* 2026-03-11 01:12:53.727000 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.727006 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.727012 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.727018 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 01:12:53.727024 | orchestrator | 
2026-03-11 01:12:53.727030 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-11 01:12:53.727036 | orchestrator | Wednesday 11 March 2026 01:08:53 +0000 (0:00:01.313) 0:04:05.642 ******* 2026-03-11 01:12:53.727043 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-11 01:12:53.727049 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-11 01:12:53.727056 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-11 01:12:53.727062 | orchestrator | 2026-03-11 01:12:53.727068 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-11 01:12:53.727074 | orchestrator | Wednesday 11 March 2026 01:08:54 +0000 (0:00:00.621) 0:04:06.264 ******* 2026-03-11 01:12:53.727080 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-11 01:12:53.727086 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-11 01:12:53.727091 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-11 01:12:53.727097 | orchestrator | 2026-03-11 01:12:53.727103 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-11 01:12:53.727114 | orchestrator | Wednesday 11 March 2026 01:08:55 +0000 (0:00:01.328) 0:04:07.593 ******* 2026-03-11 01:12:53.727121 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-11 01:12:53.727126 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.727132 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-11 01:12:53.727138 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.727143 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-11 01:12:53.727155 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.727161 | orchestrator | 2026-03-11 01:12:53.727168 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl 
variables] ********************** 2026-03-11 01:12:53.727174 | orchestrator | Wednesday 11 March 2026 01:08:56 +0000 (0:00:00.543) 0:04:08.137 ******* 2026-03-11 01:12:53.727180 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 01:12:53.727186 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 01:12:53.727193 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.727199 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 01:12:53.727205 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 01:12:53.727211 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.727217 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-11 01:12:53.727223 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 01:12:53.727229 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-11 01:12:53.727235 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 01:12:53.727242 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.727248 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-11 01:12:53.727254 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-11 01:12:53.727260 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-11 01:12:53.727265 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-11 01:12:53.727271 | orchestrator | 2026-03-11 01:12:53.727277 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-11 01:12:53.727283 | orchestrator | Wednesday 11 March 2026 
01:08:57 +0000 (0:00:01.247) 0:04:09.384 ******* 2026-03-11 01:12:53.727290 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.727308 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.727314 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.727319 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.727324 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.727329 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.727334 | orchestrator | 2026-03-11 01:12:53.727339 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-11 01:12:53.727344 | orchestrator | Wednesday 11 March 2026 01:08:58 +0000 (0:00:00.986) 0:04:10.371 ******* 2026-03-11 01:12:53.727349 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.727354 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.727359 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.727364 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.727370 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.727375 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.727380 | orchestrator | 2026-03-11 01:12:53.727386 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-11 01:12:53.727392 | orchestrator | Wednesday 11 March 2026 01:09:00 +0000 (0:00:01.691) 0:04:12.062 ******* 2026-03-11 01:12:53.727407 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727609 | orchestrator | 2026-03-11 01:12:53.727616 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:53.727622 | orchestrator | Wednesday 11 March 2026 01:09:02 +0000 (0:00:02.150) 0:04:14.212 ******* 2026-03-11 01:12:53.727628 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:12:53.727635 | orchestrator | 2026-03-11 01:12:53.727641 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-11 01:12:53.727646 | orchestrator | Wednesday 11 March 2026 01:09:03 +0000 (0:00:01.254) 0:04:15.466 ******* 2026-03-11 01:12:53.727653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2026-03-11 01:12:53.727688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727707 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727757 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.727785 | orchestrator | 2026-03-11 01:12:53.727791 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-11 01:12:53.727797 | orchestrator | Wednesday 11 March 2026 01:09:07 +0000 (0:00:03.282) 0:04:18.749 ******* 2026-03-11 01:12:53.727806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.727812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.727819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.727828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.727838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.727844 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.727850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.727855 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.727864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.727871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.727883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.727889 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.727899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:53.727907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.727913 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.727919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:53.727928 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.727934 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.727941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:53.727951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.727957 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
01:12:53.727963 | orchestrator | 2026-03-11 01:12:53.727969 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-11 01:12:53.727975 | orchestrator | Wednesday 11 March 2026 01:09:08 +0000 (0:00:01.856) 0:04:20.606 ******* 2026-03-11 01:12:53.728360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.728380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.728391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.728398 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.728404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.728416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.728445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.728459 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.728468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.728475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.728484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.728498 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.728505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.728511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.728517 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.728539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.728546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.728552 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.728558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.728568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.728578 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.728584 | orchestrator |
2026-03-11 01:12:53.728590 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-11 01:12:53.728597 | orchestrator | Wednesday 11 March 2026 01:09:11 +0000 (0:00:02.206) 0:04:22.812 *******
2026-03-11 01:12:53.728603 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.728609 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.728615 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.728621 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 01:12:53.728628 | orchestrator |
2026-03-11 01:12:53.728634 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-11 01:12:53.728640 | orchestrator | Wednesday 11 March 2026 01:09:12 +0000 (0:00:01.064) 0:04:23.876 *******
2026-03-11 01:12:53.728646 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-11 01:12:53.728652 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-11 01:12:53.728658 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-11 01:12:53.728664 | orchestrator |
2026-03-11 01:12:53.728670 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-11 01:12:53.728675 | orchestrator | Wednesday 11 March 2026 01:09:13 +0000 (0:00:00.951) 0:04:24.828 *******
2026-03-11 01:12:53.728681 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-11 01:12:53.728687 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-11 01:12:53.728693 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-11 01:12:53.728698 | orchestrator |
2026-03-11 01:12:53.728704 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-11 01:12:53.728710 | orchestrator | Wednesday 11 March 2026 01:09:14 +0000 (0:00:00.907) 0:04:25.736 *******
2026-03-11 01:12:53.728716 | orchestrator | ok: [testbed-node-3]
2026-03-11 01:12:53.728722 | orchestrator | ok: [testbed-node-4]
2026-03-11 01:12:53.728728 | orchestrator | ok: [testbed-node-5]
2026-03-11 01:12:53.728733 | orchestrator |
2026-03-11 01:12:53.728740 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-11 01:12:53.728746 | orchestrator | Wednesday 11 March 2026 01:09:14 +0000 (0:00:00.512) 0:04:26.248 *******
2026-03-11 01:12:53.728752 | orchestrator | ok: [testbed-node-3]
2026-03-11 01:12:53.728757 | orchestrator | ok: [testbed-node-4]
2026-03-11 01:12:53.728763 | orchestrator | ok: [testbed-node-5]
2026-03-11 01:12:53.728769 | orchestrator |
2026-03-11 01:12:53.728775 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-11 01:12:53.728781 | orchestrator | Wednesday 11 March 2026 01:09:15 +0000 (0:00:00.785) 0:04:27.034 *******
2026-03-11 01:12:53.728787 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-11 01:12:53.728793 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-11 01:12:53.728799 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-11 01:12:53.728804 | orchestrator |
2026-03-11 01:12:53.728811 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-11 01:12:53.728834 | orchestrator | Wednesday 11 March 2026 01:09:16 +0000 (0:00:01.134) 0:04:28.168 *******
2026-03-11 01:12:53.728841 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-11 01:12:53.728846 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-11 01:12:53.728851 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-11 01:12:53.728856 | orchestrator |
2026-03-11 01:12:53.728862 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-11 01:12:53.728868 | orchestrator | Wednesday 11 March 2026 01:09:17 +0000 (0:00:01.208) 0:04:29.377 *******
2026-03-11 01:12:53.728873 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-11 01:12:53.728884 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-11 01:12:53.728890 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-11 01:12:53.728896 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-11 01:12:53.728902 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-11 01:12:53.728908 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-11 01:12:53.728914 | orchestrator |
2026-03-11 01:12:53.728919 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-11 01:12:53.728925 | orchestrator | Wednesday 11 March 2026 01:09:21 +0000 (0:00:03.715) 0:04:33.092 *******
2026-03-11 01:12:53.728931 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:53.728937 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:53.728943 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:53.728949 | orchestrator |
2026-03-11 01:12:53.728956 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-11 01:12:53.728962 | orchestrator | Wednesday 11 March 2026 01:09:21 +0000 (0:00:00.512) 0:04:33.605 *******
2026-03-11 01:12:53.728968 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:53.728974 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:53.728980 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:53.728986 | orchestrator |
2026-03-11 01:12:53.728992 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-11 01:12:53.728998 | orchestrator | Wednesday 11 March 2026 01:09:22 +0000 (0:00:00.329) 0:04:33.935 *******
2026-03-11 01:12:53.729004 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:53.729010 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:53.729016 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:53.729022 | orchestrator |
2026-03-11 01:12:53.729028 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-11 01:12:53.729034 | orchestrator | Wednesday 11 March 2026 01:09:23 +0000 (0:00:01.129) 0:04:35.064 *******
2026-03-11 01:12:53.729044 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-11 01:12:53.729051 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-11 01:12:53.729057 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-11 01:12:53.729063 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-11 01:12:53.729069 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-11 01:12:53.729075 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-11 01:12:53.729081 | orchestrator |
2026-03-11 01:12:53.729087 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-11 01:12:53.729093 | orchestrator | Wednesday 11 March 2026 01:09:26 +0000 (0:00:03.153) 0:04:38.218 *******
2026-03-11 01:12:53.729099 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 01:12:53.729105 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 01:12:53.729111 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 01:12:53.729117 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 01:12:53.729123 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:53.729130 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 01:12:53.729136 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:53.729142 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 01:12:53.729148 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:53.729158 | orchestrator |
2026-03-11 01:12:53.729164 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-11 01:12:53.729170 | orchestrator | Wednesday 11 March 2026 01:09:30 +0000 (0:00:03.619) 0:04:41.837 *******
2026-03-11 01:12:53.729175 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:53.729182 | orchestrator |
2026-03-11 01:12:53.729188 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-11 01:12:53.729194 | orchestrator | Wednesday 11 March 2026 01:09:30 +0000 (0:00:00.143) 0:04:41.981 *******
2026-03-11 01:12:53.729200 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:53.729206 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:53.729212 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:53.729218 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.729224 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.729230 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.729236 | orchestrator |
2026-03-11 01:12:53.729242 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-11 01:12:53.729248 | orchestrator | Wednesday 11 March 2026 01:09:30 +0000 (0:00:00.590) 0:04:42.572 *******
2026-03-11 01:12:53.729254 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-11 01:12:53.729260 | orchestrator |
2026-03-11 01:12:53.729266 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-11 01:12:53.729292 | orchestrator | Wednesday 11 March 2026 01:09:31 +0000 (0:00:00.698) 0:04:43.271 *******
2026-03-11 01:12:53.729334 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:53.729339 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:53.729344 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:53.729348 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.729353 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.729358 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.729362 | orchestrator |
2026-03-11 01:12:53.729367 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-11 01:12:53.729371 | orchestrator | Wednesday 11 March 2026 01:09:32 +0000 (0:00:00.825) 0:04:44.097 *******
2026-03-11 01:12:53.729377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:53.729386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:53.729391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:53.729400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.729410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.729415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.729421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:53.729429 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:53.729434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:53.729447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729491 | orchestrator |
2026-03-11 01:12:53.729496 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-11 01:12:53.729501 | orchestrator | Wednesday 11 March 2026 01:09:35 +0000 (0:00:03.386) 0:04:47.483 *******
2026-03-11 01:12:53.729506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:53.729515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:53.729521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:53.729526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:53.729539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:53.729545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:53.729551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.729586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.729592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:53.729598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:53.729621 | orchestrator |
2026-03-11 01:12:53.729627 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-11 01:12:53.729634 | orchestrator | Wednesday 11 March 2026 01:09:42 +0000 (0:00:06.372) 0:04:53.855 *******
2026-03-11 01:12:53.729640 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:53.729646 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:53.729656 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:53.729662 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.729668 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.729674 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.729680 | orchestrator |
2026-03-11 01:12:53.729686 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-11 01:12:53.729692 | orchestrator | Wednesday 11 March 2026 01:09:43 +0000 (0:00:01.202) 0:04:55.057 *******
2026-03-11 01:12:53.729698 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-11 01:12:53.729705 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-11 01:12:53.729714 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-11 01:12:53.729721 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-11 01:12:53.729726 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-11 01:12:53.729732 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:53.729739 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-11 01:12:53.729745 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:53.729751 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-11 01:12:53.729756 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-11 01:12:53.729763 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:53.729768 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-11 01:12:53.729774 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-11 01:12:53.729779 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-11 01:12:53.729784 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-11 01:12:53.729790 | orchestrator |
2026-03-11 01:12:53.729796 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-11 01:12:53.729803 | orchestrator | Wednesday 11 March 2026 01:09:46 +0000 (0:00:03.292) 0:04:58.350 ******* 2026-03-11 01:12:53.729808 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.729814 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.729820 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.729826 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.729832 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.729838 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.729843 | orchestrator | 2026-03-11 01:12:53.729848 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-11 01:12:53.729853 | orchestrator | Wednesday 11 March 2026 01:09:47 +0000 (0:00:00.518) 0:04:58.869 ******* 2026-03-11 01:12:53.729858 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-11 01:12:53.729864 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-11 01:12:53.729870 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-11 01:12:53.729876 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-11 01:12:53.729881 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-11 01:12:53.729891 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-11 01:12:53.729897 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-11 01:12:53.729908 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-11 01:12:53.729914 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-11 01:12:53.729920 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.729926 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-11 01:12:53.729931 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-11 01:12:53.729937 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.729943 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-11 01:12:53.729949 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.729955 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-11 01:12:53.729960 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-11 01:12:53.729967 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-11 01:12:53.729973 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-11 01:12:53.729979 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-11 01:12:53.729985 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-11 01:12:53.729991 | orchestrator | 2026-03-11 01:12:53.729997 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-11 
01:12:53.730003 | orchestrator | Wednesday 11 March 2026 01:09:51 +0000 (0:00:04.688) 0:05:03.558 ******* 2026-03-11 01:12:53.730051 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-11 01:12:53.730060 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-11 01:12:53.730066 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-11 01:12:53.730072 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-11 01:12:53.730078 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-11 01:12:53.730085 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-11 01:12:53.730090 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-11 01:12:53.730096 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-11 01:12:53.730103 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-11 01:12:53.730109 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-11 01:12:53.730115 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-11 01:12:53.730121 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-11 01:12:53.730127 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.730132 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-11 01:12:53.730139 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-11 01:12:53.730145 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-11 01:12:53.730155 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.730161 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-11 01:12:53.730167 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.730173 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-11 01:12:53.730180 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-11 01:12:53.730186 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-11 01:12:53.730192 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-11 01:12:53.730199 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-11 01:12:53.730205 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-11 01:12:53.730211 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-11 01:12:53.730217 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-11 01:12:53.730223 | orchestrator | 2026-03-11 01:12:53.730234 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-11 01:12:53.730240 | orchestrator | Wednesday 11 March 2026 01:09:59 +0000 (0:00:07.424) 0:05:10.983 ******* 2026-03-11 01:12:53.730246 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.730252 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.730259 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.730264 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.730271 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.730277 
| orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.730283 | orchestrator | 2026-03-11 01:12:53.730289 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-11 01:12:53.730309 | orchestrator | Wednesday 11 March 2026 01:10:00 +0000 (0:00:00.792) 0:05:11.775 ******* 2026-03-11 01:12:53.730315 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.730321 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.730327 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.730332 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.730337 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.730342 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.730348 | orchestrator | 2026-03-11 01:12:53.730354 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-11 01:12:53.730360 | orchestrator | Wednesday 11 March 2026 01:10:00 +0000 (0:00:00.603) 0:05:12.378 ******* 2026-03-11 01:12:53.730366 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.730371 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.730377 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.730383 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.730389 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.730395 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.730401 | orchestrator | 2026-03-11 01:12:53.730406 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-11 01:12:53.730411 | orchestrator | Wednesday 11 March 2026 01:10:02 +0000 (0:00:01.903) 0:05:14.282 ******* 2026-03-11 01:12:53.730420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.730432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.730439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.730445 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.730457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.730464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.730474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.730488 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.730495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:53.730501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:53.730511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.730518 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.730525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:53.730531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.730538 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.730547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:53.730557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.730564 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.730569 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:53.730574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:53.730579 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.730584 | orchestrator | 2026-03-11 01:12:53.730590 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-11 01:12:53.730596 | orchestrator | Wednesday 11 March 2026 01:10:03 +0000 (0:00:01.374) 0:05:15.656 ******* 2026-03-11 01:12:53.730602 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-11 01:12:53.730611 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-11 01:12:53.730617 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.730623 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-11 01:12:53.730629 | orchestrator | skipping: [testbed-node-4] 
=> (item=nova-compute-ironic)  2026-03-11 01:12:53.730635 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.730641 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-11 01:12:53.730647 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-11 01:12:53.730653 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.730658 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-11 01:12:53.730664 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-11 01:12:53.730670 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.730675 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-11 01:12:53.730681 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-11 01:12:53.730687 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.730693 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-11 01:12:53.730699 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-11 01:12:53.730709 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.730715 | orchestrator | 2026-03-11 01:12:53.730721 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-11 01:12:53.730727 | orchestrator | Wednesday 11 March 2026 01:10:04 +0000 (0:00:00.835) 0:05:16.492 ******* 2026-03-11 01:12:53.730736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:53.730853 | orchestrator | 2026-03-11 01:12:53.730859 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:53.730865 | orchestrator | Wednesday 11 March 2026 01:10:07 +0000 (0:00:02.974) 0:05:19.466 ******* 2026-03-11 01:12:53.730871 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.730877 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.730883 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.730889 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.730894 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.730900 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.730906 | orchestrator | 2026-03-11 01:12:53.730912 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-11 01:12:53.730918 | orchestrator | Wednesday 11 March 2026 01:10:08 +0000 (0:00:00.759) 0:05:20.225 ******* 2026-03-11 01:12:53.730923 | orchestrator | 2026-03-11 01:12:53.730929 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-11 01:12:53.730941 | orchestrator | Wednesday 11 March 2026 01:10:08 +0000 (0:00:00.128) 0:05:20.354 ******* 2026-03-11 01:12:53.730948 | orchestrator | 2026-03-11 01:12:53.730957 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-11 01:12:53.730964 | orchestrator | Wednesday 11 March 2026 01:10:08 
+0000 (0:00:00.130) 0:05:20.484 ******* 2026-03-11 01:12:53.730970 | orchestrator | 2026-03-11 01:12:53.730976 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-11 01:12:53.730982 | orchestrator | Wednesday 11 March 2026 01:10:08 +0000 (0:00:00.127) 0:05:20.612 ******* 2026-03-11 01:12:53.730988 | orchestrator | 2026-03-11 01:12:53.730994 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-11 01:12:53.731000 | orchestrator | Wednesday 11 March 2026 01:10:09 +0000 (0:00:00.128) 0:05:20.740 ******* 2026-03-11 01:12:53.731005 | orchestrator | 2026-03-11 01:12:53.731011 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-11 01:12:53.731017 | orchestrator | Wednesday 11 March 2026 01:10:09 +0000 (0:00:00.123) 0:05:20.863 ******* 2026-03-11 01:12:53.731023 | orchestrator | 2026-03-11 01:12:53.731029 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-11 01:12:53.731035 | orchestrator | Wednesday 11 March 2026 01:10:09 +0000 (0:00:00.274) 0:05:21.138 ******* 2026-03-11 01:12:53.731041 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:53.731048 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:53.731054 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:53.731060 | orchestrator | 2026-03-11 01:12:53.731066 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-11 01:12:53.731072 | orchestrator | Wednesday 11 March 2026 01:10:19 +0000 (0:00:09.814) 0:05:30.953 ******* 2026-03-11 01:12:53.731078 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:53.731084 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:53.731090 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:53.731096 | orchestrator | 2026-03-11 01:12:53.731101 | orchestrator | RUNNING HANDLER 
[nova-cell : Restart nova-ssh container] *********************** 2026-03-11 01:12:53.731107 | orchestrator | Wednesday 11 March 2026 01:10:35 +0000 (0:00:15.896) 0:05:46.849 ******* 2026-03-11 01:12:53.731113 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.731119 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.731126 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.731132 | orchestrator | 2026-03-11 01:12:53.731137 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-11 01:12:53.731143 | orchestrator | Wednesday 11 March 2026 01:10:54 +0000 (0:00:19.541) 0:06:06.390 ******* 2026-03-11 01:12:53.731149 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.731155 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.731161 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.731167 | orchestrator | 2026-03-11 01:12:53.731173 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-11 01:12:53.731182 | orchestrator | Wednesday 11 March 2026 01:11:21 +0000 (0:00:26.624) 0:06:33.015 ******* 2026-03-11 01:12:53.731189 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.731195 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.731201 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.731207 | orchestrator | 2026-03-11 01:12:53.731213 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-11 01:12:53.731218 | orchestrator | Wednesday 11 March 2026 01:11:21 +0000 (0:00:00.707) 0:06:33.722 ******* 2026-03-11 01:12:53.731224 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.731230 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.731235 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.731241 | orchestrator | 2026-03-11 01:12:53.731248 | orchestrator | RUNNING HANDLER [nova-cell : 
Restart nova-compute container] ******************* 2026-03-11 01:12:53.731254 | orchestrator | Wednesday 11 March 2026 01:11:22 +0000 (0:00:00.738) 0:06:34.460 ******* 2026-03-11 01:12:53.731260 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:53.731270 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:53.731276 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:53.731282 | orchestrator | 2026-03-11 01:12:53.731288 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-11 01:12:53.731326 | orchestrator | Wednesday 11 March 2026 01:11:45 +0000 (0:00:22.361) 0:06:56.822 ******* 2026-03-11 01:12:53.731335 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.731341 | orchestrator | 2026-03-11 01:12:53.731347 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-11 01:12:53.731353 | orchestrator | Wednesday 11 March 2026 01:11:45 +0000 (0:00:00.113) 0:06:56.935 ******* 2026-03-11 01:12:53.731359 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.731365 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.731371 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.731376 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.731382 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.731390 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-11 01:12:53.731396 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:53.731402 | orchestrator | 2026-03-11 01:12:53.731408 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-11 01:12:53.731414 | orchestrator | Wednesday 11 March 2026 01:12:06 +0000 (0:00:21.436) 0:07:18.372 ******* 2026-03-11 01:12:53.731420 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.731426 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.731432 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.731438 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.731444 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.731450 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.731456 | orchestrator | 2026-03-11 01:12:53.731462 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-11 01:12:53.731468 | orchestrator | Wednesday 11 March 2026 01:12:14 +0000 (0:00:07.602) 0:07:25.975 ******* 2026-03-11 01:12:53.731474 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.731480 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.731486 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.731493 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.731499 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.731507 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-11 01:12:53.731512 | orchestrator | 2026-03-11 01:12:53.731517 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-11 01:12:53.731522 | orchestrator | Wednesday 11 March 2026 01:12:17 +0000 (0:00:03.385) 0:07:29.360 ******* 2026-03-11 01:12:53.731526 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:53.731531 | 
orchestrator | 2026-03-11 01:12:53.731536 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-11 01:12:53.731541 | orchestrator | Wednesday 11 March 2026 01:12:30 +0000 (0:00:13.156) 0:07:42.516 ******* 2026-03-11 01:12:53.731546 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:53.731550 | orchestrator | 2026-03-11 01:12:53.731555 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-11 01:12:53.731560 | orchestrator | Wednesday 11 March 2026 01:12:32 +0000 (0:00:01.235) 0:07:43.751 ******* 2026-03-11 01:12:53.731565 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.731570 | orchestrator | 2026-03-11 01:12:53.731575 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-11 01:12:53.731579 | orchestrator | Wednesday 11 March 2026 01:12:33 +0000 (0:00:01.245) 0:07:44.997 ******* 2026-03-11 01:12:53.731584 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:53.731589 | orchestrator | 2026-03-11 01:12:53.731594 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-11 01:12:53.731603 | orchestrator | Wednesday 11 March 2026 01:12:44 +0000 (0:00:11.425) 0:07:56.423 ******* 2026-03-11 01:12:53.731608 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:12:53.731613 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:12:53.731617 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:12:53.731622 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:12:53.731627 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:12:53.731632 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:12:53.731637 | orchestrator | 2026-03-11 01:12:53.731642 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-11 01:12:53.731647 | orchestrator | 2026-03-11 
01:12:53.731652 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-11 01:12:53.731658 | orchestrator | Wednesday 11 March 2026 01:12:46 +0000 (0:00:01.987) 0:07:58.410 ******* 2026-03-11 01:12:53.731663 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:53.731668 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:53.731673 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:53.731678 | orchestrator | 2026-03-11 01:12:53.731684 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-11 01:12:53.731689 | orchestrator | 2026-03-11 01:12:53.731694 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-11 01:12:53.731706 | orchestrator | Wednesday 11 March 2026 01:12:47 +0000 (0:00:01.192) 0:07:59.603 ******* 2026-03-11 01:12:53.731711 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.731717 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.731722 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.731727 | orchestrator | 2026-03-11 01:12:53.731732 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-11 01:12:53.731738 | orchestrator | 2026-03-11 01:12:53.731743 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-11 01:12:53.731748 | orchestrator | Wednesday 11 March 2026 01:12:48 +0000 (0:00:00.528) 0:08:00.131 ******* 2026-03-11 01:12:53.731754 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-11 01:12:53.731759 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-11 01:12:53.731765 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-11 01:12:53.731770 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-11 01:12:53.731774 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-11 01:12:53.731779 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:53.731784 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:53.731788 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-11 01:12:53.731793 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-11 01:12:53.731798 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-11 01:12:53.731802 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-11 01:12:53.731807 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-11 01:12:53.731812 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:53.731817 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:53.731822 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-11 01:12:53.731826 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-11 01:12:53.731831 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-11 01:12:53.731836 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-11 01:12:53.731841 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-11 01:12:53.731846 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:53.731851 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:53.731856 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-11 01:12:53.731867 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-11 01:12:53.731871 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-11 01:12:53.731876 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-11 01:12:53.731881 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-11 01:12:53.731885 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:53.731890 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.731895 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-11 01:12:53.731900 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-11 01:12:53.731910 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-11 01:12:53.731915 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-11 01:12:53.731920 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-11 01:12:53.731925 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:53.731929 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.731934 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-11 01:12:53.731938 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-11 01:12:53.731943 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-11 01:12:53.731948 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-11 01:12:53.731952 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-11 01:12:53.731957 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:53.731962 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.731966 | orchestrator | 2026-03-11 01:12:53.731972 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-11 01:12:53.731976 | orchestrator | 2026-03-11 01:12:53.731981 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-11 01:12:53.731986 | orchestrator | Wednesday 11 March 2026 01:12:49 +0000 (0:00:01.370) 
0:08:01.501 ******* 2026-03-11 01:12:53.731991 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-11 01:12:53.731996 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-11 01:12:53.732001 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.732006 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-11 01:12:53.732011 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-11 01:12:53.732016 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.732020 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-11 01:12:53.732025 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-11 01:12:53.732030 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:53.732035 | orchestrator | 2026-03-11 01:12:53.732040 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-11 01:12:53.732044 | orchestrator | 2026-03-11 01:12:53.732049 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-11 01:12:53.732054 | orchestrator | Wednesday 11 March 2026 01:12:50 +0000 (0:00:00.772) 0:08:02.274 ******* 2026-03-11 01:12:53.732058 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.732063 | orchestrator | 2026-03-11 01:12:53.732075 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-11 01:12:53.732080 | orchestrator | 2026-03-11 01:12:53.732085 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-11 01:12:53.732090 | orchestrator | Wednesday 11 March 2026 01:12:51 +0000 (0:00:00.664) 0:08:02.939 ******* 2026-03-11 01:12:53.732095 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:53.732100 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:53.732105 | orchestrator | skipping: [testbed-node-2] 
2026-03-11 01:12:53.732109 | orchestrator | 2026-03-11 01:12:53.732114 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:12:53.732124 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:12:53.732130 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-03-11 01:12:53.732135 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-11 01:12:53.732140 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-11 01:12:53.732145 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-11 01:12:53.732149 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-03-11 01:12:53.732154 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-11 01:12:53.732159 | orchestrator | 2026-03-11 01:12:53.732164 | orchestrator | 2026-03-11 01:12:53.732168 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:12:53.732173 | orchestrator | Wednesday 11 March 2026 01:12:51 +0000 (0:00:00.429) 0:08:03.368 ******* 2026-03-11 01:12:53.732178 | orchestrator | =============================================================================== 2026-03-11 01:12:53.732183 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.73s 2026-03-11 01:12:53.732188 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 26.62s 2026-03-11 01:12:53.732192 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.36s 2026-03-11 01:12:53.732197 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 21.44s 2026-03-11 01:12:53.732202 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.23s 2026-03-11 01:12:53.732207 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.00s 2026-03-11 01:12:53.732216 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.54s 2026-03-11 01:12:53.732221 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.22s 2026-03-11 01:12:53.732226 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.90s 2026-03-11 01:12:53.732232 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.51s 2026-03-11 01:12:53.732236 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.27s 2026-03-11 01:12:53.732241 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.16s 2026-03-11 01:12:53.732246 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.99s 2026-03-11 01:12:53.732250 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.78s 2026-03-11 01:12:53.732255 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.43s 2026-03-11 01:12:53.732260 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.81s 2026-03-11 01:12:53.732265 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.60s 2026-03-11 01:12:53.732269 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.42s 2026-03-11 01:12:53.732274 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 6.79s 2026-03-11 01:12:53.732279 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 6.76s 2026-03-11 01:12:53.732283 | orchestrator | 2026-03-11 01:12:53 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:53.732292 | orchestrator | 2026-03-11 01:12:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:56.762691 | orchestrator | 2026-03-11 01:12:56 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:56.762764 | orchestrator | 2026-03-11 01:12:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:59.798908 | orchestrator | 2026-03-11 01:12:59 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:12:59.799153 | orchestrator | 2026-03-11 01:12:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:02.851788 | orchestrator | 2026-03-11 01:13:02 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:13:02.851836 | orchestrator | 2026-03-11 01:13:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:05.898680 | orchestrator | 2026-03-11 01:13:05 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:13:05.898733 | orchestrator | 2026-03-11 01:13:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:08.951381 | orchestrator | 2026-03-11 01:13:08 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:13:08.951429 | orchestrator | 2026-03-11 01:13:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:11.995466 | orchestrator | 2026-03-11 01:13:11 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state STARTED 2026-03-11 01:13:11.995526 | orchestrator | 2026-03-11 01:13:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:15.062573 | orchestrator | 2026-03-11 01:13:15 | INFO  | Task 037dddb2-ddbf-41e0-b609-0eca838d7595 is in state SUCCESS 2026-03-11 01:13:15.064380 | orchestrator | 2026-03-11 01:13:15.064417 | 
orchestrator | 2026-03-11 01:13:15.064421 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:13:15.064425 | orchestrator | 2026-03-11 01:13:15.064428 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:13:15.064432 | orchestrator | Wednesday 11 March 2026 01:08:56 +0000 (0:00:00.256) 0:00:00.256 ******* 2026-03-11 01:13:15.064435 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.064439 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:15.064442 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:15.064445 | orchestrator | 2026-03-11 01:13:15.064449 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:13:15.064452 | orchestrator | Wednesday 11 March 2026 01:08:57 +0000 (0:00:00.307) 0:00:00.564 ******* 2026-03-11 01:13:15.064455 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-11 01:13:15.064459 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-11 01:13:15.064462 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-11 01:13:15.064465 | orchestrator | 2026-03-11 01:13:15.064468 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-11 01:13:15.064471 | orchestrator | 2026-03-11 01:13:15.064475 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:15.064478 | orchestrator | Wednesday 11 March 2026 01:08:57 +0000 (0:00:00.430) 0:00:00.994 ******* 2026-03-11 01:13:15.064481 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:15.064485 | orchestrator | 2026-03-11 01:13:15.064488 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-11 01:13:15.064491 | 
orchestrator | Wednesday 11 March 2026 01:08:58 +0000 (0:00:00.600) 0:00:01.595 ******* 2026-03-11 01:13:15.064495 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-11 01:13:15.064498 | orchestrator | 2026-03-11 01:13:15.064501 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-11 01:13:15.064515 | orchestrator | Wednesday 11 March 2026 01:09:01 +0000 (0:00:03.030) 0:00:04.626 ******* 2026-03-11 01:13:15.064518 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-11 01:13:15.064522 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-11 01:13:15.064525 | orchestrator | 2026-03-11 01:13:15.064528 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-11 01:13:15.064531 | orchestrator | Wednesday 11 March 2026 01:09:07 +0000 (0:00:06.432) 0:00:11.058 ******* 2026-03-11 01:13:15.064534 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:13:15.064537 | orchestrator | 2026-03-11 01:13:15.064541 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-11 01:13:15.064544 | orchestrator | Wednesday 11 March 2026 01:09:11 +0000 (0:00:03.440) 0:00:14.499 ******* 2026-03-11 01:13:15.064547 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:13:15.064550 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-11 01:13:15.064553 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-11 01:13:15.064556 | orchestrator | 2026-03-11 01:13:15.064589 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-11 01:13:15.064592 | orchestrator | Wednesday 11 March 2026 01:09:18 +0000 (0:00:07.165) 0:00:21.664 ******* 
2026-03-11 01:13:15.064595 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:13:15.064599 | orchestrator | 2026-03-11 01:13:15.064602 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-11 01:13:15.064605 | orchestrator | Wednesday 11 March 2026 01:09:21 +0000 (0:00:03.074) 0:00:24.738 ******* 2026-03-11 01:13:15.064628 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-11 01:13:15.064632 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-11 01:13:15.064635 | orchestrator | 2026-03-11 01:13:15.064638 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-11 01:13:15.064641 | orchestrator | Wednesday 11 March 2026 01:09:28 +0000 (0:00:07.034) 0:00:31.773 ******* 2026-03-11 01:13:15.064644 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-11 01:13:15.064647 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-11 01:13:15.064651 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-11 01:13:15.064660 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-11 01:13:15.064663 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-11 01:13:15.064666 | orchestrator | 2026-03-11 01:13:15.064669 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:15.064673 | orchestrator | Wednesday 11 March 2026 01:09:44 +0000 (0:00:16.151) 0:00:47.924 ******* 2026-03-11 01:13:15.064676 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:15.064679 | orchestrator | 2026-03-11 01:13:15.064682 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-11 
01:13:15.064685 | orchestrator | Wednesday 11 March 2026 01:09:45 +0000 (0:00:00.650) 0:00:48.575 ******* 2026-03-11 01:13:15.064736 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.064740 | orchestrator | 2026-03-11 01:13:15.064743 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-11 01:13:15.064750 | orchestrator | Wednesday 11 March 2026 01:09:49 +0000 (0:00:04.668) 0:00:53.243 ******* 2026-03-11 01:13:15.064753 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.064756 | orchestrator | 2026-03-11 01:13:15.064760 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-11 01:13:15.064771 | orchestrator | Wednesday 11 March 2026 01:09:54 +0000 (0:00:04.513) 0:00:57.757 ******* 2026-03-11 01:13:15.064776 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.064826 | orchestrator | 2026-03-11 01:13:15.064833 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-11 01:13:15.064839 | orchestrator | Wednesday 11 March 2026 01:09:58 +0000 (0:00:03.871) 0:01:01.628 ******* 2026-03-11 01:13:15.064844 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-11 01:13:15.064850 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-11 01:13:15.064855 | orchestrator | 2026-03-11 01:13:15.064860 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-11 01:13:15.064865 | orchestrator | Wednesday 11 March 2026 01:10:07 +0000 (0:00:09.213) 0:01:10.842 ******* 2026-03-11 01:13:15.064871 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-11 01:13:15.064876 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-11 
01:13:15.064883 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-11 01:13:15.064888 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-11 01:13:15.064894 | orchestrator | 2026-03-11 01:13:15.064899 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-11 01:13:15.064906 | orchestrator | Wednesday 11 March 2026 01:10:21 +0000 (0:00:14.198) 0:01:25.041 ******* 2026-03-11 01:13:15.064909 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.064913 | orchestrator | 2026-03-11 01:13:15.064916 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-11 01:13:15.064919 | orchestrator | Wednesday 11 March 2026 01:10:26 +0000 (0:00:04.625) 0:01:29.667 ******* 2026-03-11 01:13:15.064922 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.064925 | orchestrator | 2026-03-11 01:13:15.064928 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-11 01:13:15.064932 | orchestrator | Wednesday 11 March 2026 01:10:30 +0000 (0:00:04.520) 0:01:34.188 ******* 2026-03-11 01:13:15.064935 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:15.064940 | orchestrator | 2026-03-11 01:13:15.065123 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-11 01:13:15.065138 | orchestrator | Wednesday 11 March 2026 01:10:31 +0000 (0:00:00.205) 0:01:34.393 ******* 2026-03-11 01:13:15.065143 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065149 | orchestrator | 2026-03-11 01:13:15.065153 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:15.065159 | 
orchestrator | Wednesday 11 March 2026 01:10:34 +0000 (0:00:03.812) 0:01:38.206 ******* 2026-03-11 01:13:15.065164 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:15.065169 | orchestrator | 2026-03-11 01:13:15.065174 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-11 01:13:15.065179 | orchestrator | Wednesday 11 March 2026 01:10:35 +0000 (0:00:00.899) 0:01:39.105 ******* 2026-03-11 01:13:15.065184 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.065188 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.065193 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.065198 | orchestrator | 2026-03-11 01:13:15.065203 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-11 01:13:15.065208 | orchestrator | Wednesday 11 March 2026 01:10:40 +0000 (0:00:05.057) 0:01:44.162 ******* 2026-03-11 01:13:15.065213 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.065218 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.065223 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.065228 | orchestrator | 2026-03-11 01:13:15.065234 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-11 01:13:15.065246 | orchestrator | Wednesday 11 March 2026 01:10:44 +0000 (0:00:03.638) 0:01:47.801 ******* 2026-03-11 01:13:15.065251 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.065312 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.065316 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.065320 | orchestrator | 2026-03-11 01:13:15.065324 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-11 01:13:15.065328 | orchestrator | Wednesday 11 March 2026 01:10:45 +0000 (0:00:00.675) 
0:01:48.476 ******* 2026-03-11 01:13:15.065332 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:15.065340 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065344 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:15.065348 | orchestrator | 2026-03-11 01:13:15.065476 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-11 01:13:15.065482 | orchestrator | Wednesday 11 March 2026 01:10:46 +0000 (0:00:01.575) 0:01:50.052 ******* 2026-03-11 01:13:15.065485 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.065488 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.065491 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.065494 | orchestrator | 2026-03-11 01:13:15.065497 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-11 01:13:15.065500 | orchestrator | Wednesday 11 March 2026 01:10:47 +0000 (0:00:01.161) 0:01:51.213 ******* 2026-03-11 01:13:15.065503 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.065506 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.065510 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.065513 | orchestrator | 2026-03-11 01:13:15.065516 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-11 01:13:15.065560 | orchestrator | Wednesday 11 March 2026 01:10:48 +0000 (0:00:01.052) 0:01:52.266 ******* 2026-03-11 01:13:15.065568 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.065572 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.065577 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.065583 | orchestrator | 2026-03-11 01:13:15.065607 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-11 01:13:15.065613 | orchestrator | Wednesday 11 March 2026 01:10:50 +0000 (0:00:01.727) 0:01:53.993 ******* 2026-03-11 
01:13:15.065618 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.065624 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.065627 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.065631 | orchestrator | 2026-03-11 01:13:15.065634 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-11 01:13:15.065637 | orchestrator | Wednesday 11 March 2026 01:10:52 +0000 (0:00:01.383) 0:01:55.377 ******* 2026-03-11 01:13:15.065641 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065644 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:15.065647 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:15.065650 | orchestrator | 2026-03-11 01:13:15.065653 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-11 01:13:15.065657 | orchestrator | Wednesday 11 March 2026 01:10:52 +0000 (0:00:00.566) 0:01:55.944 ******* 2026-03-11 01:13:15.065660 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:15.065663 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065666 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:15.065669 | orchestrator | 2026-03-11 01:13:15.065672 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:15.065675 | orchestrator | Wednesday 11 March 2026 01:10:54 +0000 (0:00:02.346) 0:01:58.290 ******* 2026-03-11 01:13:15.065679 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:15.065682 | orchestrator | 2026-03-11 01:13:15.065685 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-11 01:13:15.065688 | orchestrator | Wednesday 11 March 2026 01:10:55 +0000 (0:00:00.710) 0:01:59.000 ******* 2026-03-11 01:13:15.065692 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065699 | orchestrator | 
2026-03-11 01:13:15.065703 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-11 01:13:15.065706 | orchestrator | Wednesday 11 March 2026 01:10:58 +0000 (0:00:02.971) 0:02:01.971 ******* 2026-03-11 01:13:15.065709 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065712 | orchestrator | 2026-03-11 01:13:15.065715 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-11 01:13:15.065718 | orchestrator | Wednesday 11 March 2026 01:11:01 +0000 (0:00:02.913) 0:02:04.885 ******* 2026-03-11 01:13:15.065721 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-11 01:13:15.065725 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-11 01:13:15.065728 | orchestrator | 2026-03-11 01:13:15.065731 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-11 01:13:15.065734 | orchestrator | Wednesday 11 March 2026 01:11:07 +0000 (0:00:05.979) 0:02:10.864 ******* 2026-03-11 01:13:15.065738 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065741 | orchestrator | 2026-03-11 01:13:15.065744 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-11 01:13:15.065747 | orchestrator | Wednesday 11 March 2026 01:11:10 +0000 (0:00:02.772) 0:02:13.637 ******* 2026-03-11 01:13:15.065750 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:15.065753 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:15.065757 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:15.065760 | orchestrator | 2026-03-11 01:13:15.065777 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-11 01:13:15.065780 | orchestrator | Wednesday 11 March 2026 01:11:10 +0000 (0:00:00.322) 0:02:13.959 ******* 2026-03-11 01:13:15.065788 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:15.065807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:15.065812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:15.065818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:15.065822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:15.065825 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:15.065831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:15.065891 | orchestrator | 2026-03-11 01:13:15.065895 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-11 01:13:15.065901 | orchestrator | Wednesday 11 March 2026 01:11:12 +0000 (0:00:02.019) 0:02:15.978 ******* 2026-03-11 01:13:15.065904 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:15.065907 | orchestrator | 2026-03-11 01:13:15.065911 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-11 01:13:15.065914 | orchestrator | Wednesday 11 March 2026 01:11:12 +0000 (0:00:00.131) 0:02:16.110 ******* 2026-03-11 01:13:15.065917 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:15.065920 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:13:15.065923 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:13:15.065926 | orchestrator | 2026-03-11 01:13:15.065929 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-11 01:13:15.065932 | orchestrator | Wednesday 11 March 2026 01:11:13 +0000 (0:00:00.532) 0:02:16.643 ******* 2026-03-11 01:13:15.065936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:15.065939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.065943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.065948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.065951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:15.065957 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:15.065971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:15.065975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.065978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.065981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.065987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:15.065992 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:13:15.066005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:15.066011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.066036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:15.066046 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:13:15.066049 | orchestrator | 2026-03-11 01:13:15.066052 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:15.066056 | orchestrator | Wednesday 11 March 2026 01:11:13 +0000 (0:00:00.678) 0:02:17.321 ******* 2026-03-11 01:13:15.066059 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:15.066062 | orchestrator | 2026-03-11 01:13:15.066065 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-11 01:13:15.066068 | orchestrator | Wednesday 11 March 2026 01:11:14 +0000 (0:00:00.582) 0:02:17.904 ******* 2026-03-11 01:13:15.066074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:15.066090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:15.066094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:15.066097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:15.066101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:15.066106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:15.066112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:15.066150 | orchestrator | 2026-03-11 01:13:15.066154 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-11 01:13:15.066157 | orchestrator | Wednesday 11 March 2026 01:11:19 +0000 (0:00:04.645) 0:02:22.550 ******* 2026-03-11 01:13:15.066160 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:15.066163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.066167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:15.066183 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:15.066186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:15.066189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.066193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:15.066208 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:13:15.066214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:15.066219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.066225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:15.066280 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:13:15.066285 | orchestrator | 2026-03-11 01:13:15.066291 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-11 01:13:15.066296 | orchestrator | Wednesday 11 March 2026 01:11:19 +0000 (0:00:00.653) 0:02:23.203 ******* 2026-03-11 01:13:15.066305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-11 01:13:15.066314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.066320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:15.066340 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:15.066349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:15.066355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:15.066362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:15.066370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066374 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:13:15.066378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066406 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:13:15.066410 | orchestrator |
2026-03-11 01:13:15.066414 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-03-11 01:13:15.066417 | orchestrator | Wednesday 11 March 2026 01:11:20 +0000 (0:00:00.960) 0:02:24.163 *******
2026-03-11 01:13:15.066421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066495 | orchestrator |
2026-03-11 01:13:15.066499 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-03-11 01:13:15.066503 | orchestrator | Wednesday 11 March 2026 01:11:25 +0000 (0:00:04.652) 0:02:28.816 *******
2026-03-11 01:13:15.066506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-11 01:13:15.066512 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-11 01:13:15.066516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-11 01:13:15.066520 | orchestrator |
2026-03-11 01:13:15.066524 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-03-11 01:13:15.066527 | orchestrator | Wednesday 11 March 2026 01:11:27 +0000 (0:00:02.149) 0:02:30.965 *******
2026-03-11 01:13:15.066534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066621 | orchestrator |
2026-03-11 01:13:15.066625 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-11 01:13:15.066629 | orchestrator | Wednesday 11 March 2026 01:11:42 +0000 (0:00:14.382) 0:02:45.348 *******
2026-03-11 01:13:15.066632 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:13:15.066636 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:13:15.066640 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:13:15.066643 | orchestrator |
2026-03-11 01:13:15.066647 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-11 01:13:15.066651 | orchestrator | Wednesday 11 March 2026 01:11:43 +0000 (0:00:01.310) 0:02:46.659 *******
2026-03-11 01:13:15.066654 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066658 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066662 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066665 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066669 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066673 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066676 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066680 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066684 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066687 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066691 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066694 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066698 | orchestrator |
2026-03-11 01:13:15.066702 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-11 01:13:15.066705 | orchestrator | Wednesday 11 March 2026 01:11:49 +0000 (0:00:05.681) 0:02:52.340 *******
2026-03-11 01:13:15.066709 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066713 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066716 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066720 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066724 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066727 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066731 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066734 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066738 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066744 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066747 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066751 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066757 | orchestrator |
2026-03-11 01:13:15.066760 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-11 01:13:15.066764 | orchestrator | Wednesday 11 March 2026 01:11:53 +0000 (0:00:04.953) 0:02:57.293 *******
2026-03-11 01:13:15.066768 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066772 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066775 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-11 01:13:15.066779 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066783 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066786 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-11 01:13:15.066790 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066794 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066799 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-11 01:13:15.066803 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066806 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066810 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-11 01:13:15.066814 | orchestrator |
2026-03-11 01:13:15.066817 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-11 01:13:15.066821 | orchestrator | Wednesday 11 March 2026 01:11:58 +0000 (0:00:04.359) 0:03:01.653 *******
2026-03-11 01:13:15.066825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:15.066842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:15.066855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:15.066885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:15.066896 | orchestrator |
2026-03-11
01:13:15.066900 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:15.066907 | orchestrator | Wednesday 11 March 2026 01:12:01 +0000 (0:00:03.477) 0:03:05.131 ******* 2026-03-11 01:13:15.066910 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:15.066914 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:13:15.066918 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:13:15.066921 | orchestrator | 2026-03-11 01:13:15.066925 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-11 01:13:15.066928 | orchestrator | Wednesday 11 March 2026 01:12:02 +0000 (0:00:00.278) 0:03:05.410 ******* 2026-03-11 01:13:15.066932 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.066936 | orchestrator | 2026-03-11 01:13:15.066939 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-11 01:13:15.066943 | orchestrator | Wednesday 11 March 2026 01:12:03 +0000 (0:00:01.911) 0:03:07.321 ******* 2026-03-11 01:13:15.066947 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.066950 | orchestrator | 2026-03-11 01:13:15.066954 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-11 01:13:15.066957 | orchestrator | Wednesday 11 March 2026 01:12:06 +0000 (0:00:02.507) 0:03:09.828 ******* 2026-03-11 01:13:15.066961 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.066964 | orchestrator | 2026-03-11 01:13:15.066968 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-11 01:13:15.066981 | orchestrator | Wednesday 11 March 2026 01:12:09 +0000 (0:00:02.638) 0:03:12.467 ******* 2026-03-11 01:13:15.066985 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.066988 | orchestrator | 2026-03-11 01:13:15.066992 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-11 01:13:15.066995 | orchestrator | Wednesday 11 March 2026 01:12:12 +0000 (0:00:02.968) 0:03:15.436 ******* 2026-03-11 01:13:15.066999 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.067003 | orchestrator | 2026-03-11 01:13:15.067006 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-11 01:13:15.067010 | orchestrator | Wednesday 11 March 2026 01:12:29 +0000 (0:00:17.170) 0:03:32.606 ******* 2026-03-11 01:13:15.067014 | orchestrator | 2026-03-11 01:13:15.067017 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-11 01:13:15.067021 | orchestrator | Wednesday 11 March 2026 01:12:29 +0000 (0:00:00.069) 0:03:32.676 ******* 2026-03-11 01:13:15.067025 | orchestrator | 2026-03-11 01:13:15.067028 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-11 01:13:15.067032 | orchestrator | Wednesday 11 March 2026 01:12:29 +0000 (0:00:00.068) 0:03:32.744 ******* 2026-03-11 01:13:15.067035 | orchestrator | 2026-03-11 01:13:15.067039 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-11 01:13:15.067044 | orchestrator | Wednesday 11 March 2026 01:12:29 +0000 (0:00:00.073) 0:03:32.817 ******* 2026-03-11 01:13:15.067048 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.067052 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.067055 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.067059 | orchestrator | 2026-03-11 01:13:15.067063 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-11 01:13:15.067066 | orchestrator | Wednesday 11 March 2026 01:12:38 +0000 (0:00:09.444) 0:03:42.261 ******* 2026-03-11 01:13:15.067070 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.067074 | orchestrator | changed: 
[testbed-node-1] 2026-03-11 01:13:15.067077 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.067081 | orchestrator | 2026-03-11 01:13:15.067085 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-11 01:13:15.067088 | orchestrator | Wednesday 11 March 2026 01:12:44 +0000 (0:00:05.765) 0:03:48.027 ******* 2026-03-11 01:13:15.067092 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.067096 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.067099 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.067103 | orchestrator | 2026-03-11 01:13:15.067107 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-11 01:13:15.067113 | orchestrator | Wednesday 11 March 2026 01:12:56 +0000 (0:00:11.477) 0:03:59.505 ******* 2026-03-11 01:13:15.067116 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.067120 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.067123 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.067127 | orchestrator | 2026-03-11 01:13:15.067130 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-11 01:13:15.067134 | orchestrator | Wednesday 11 March 2026 01:13:06 +0000 (0:00:10.496) 0:04:10.001 ******* 2026-03-11 01:13:15.067138 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:15.067141 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:15.067145 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:15.067148 | orchestrator | 2026-03-11 01:13:15.067152 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:13:15.067156 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:13:15.067160 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-11 01:13:15.067164 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:13:15.067168 | orchestrator | 2026-03-11 01:13:15.067171 | orchestrator | 2026-03-11 01:13:15.067175 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:13:15.067179 | orchestrator | Wednesday 11 March 2026 01:13:11 +0000 (0:00:05.091) 0:04:15.093 ******* 2026-03-11 01:13:15.067182 | orchestrator | =============================================================================== 2026-03-11 01:13:15.067186 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 17.17s 2026-03-11 01:13:15.067189 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.15s 2026-03-11 01:13:15.067193 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 14.38s 2026-03-11 01:13:15.067197 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.20s 2026-03-11 01:13:15.067200 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 11.48s 2026-03-11 01:13:15.067204 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.50s 2026-03-11 01:13:15.067208 | orchestrator | octavia : Restart octavia-api container --------------------------------- 9.44s 2026-03-11 01:13:15.067211 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.21s 2026-03-11 01:13:15.067215 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.17s 2026-03-11 01:13:15.067218 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.03s 2026-03-11 01:13:15.067222 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.43s 2026-03-11 01:13:15.067226 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 5.98s 2026-03-11 01:13:15.067229 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 5.77s 2026-03-11 01:13:15.067234 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.68s 2026-03-11 01:13:15.067238 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.09s 2026-03-11 01:13:15.067241 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.06s 2026-03-11 01:13:15.067245 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.95s 2026-03-11 01:13:15.067249 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 4.67s 2026-03-11 01:13:15.067252 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.65s 2026-03-11 01:13:15.067268 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.65s 2026-03-11 01:13:15.067272 | orchestrator | 2026-03-11 01:13:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:14:15.827060 | orchestrator | 2026-03-11 01:14:16.033463 | orchestrator | 2026-03-11 01:14:16.039255 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Mar 11 01:14:16 UTC 2026 2026-03-11 01:14:16.039313 | orchestrator | 2026-03-11 01:14:16.440557 | orchestrator | ok: Runtime: 0:33:01.895072 2026-03-11 01:14:16.740783 | 2026-03-11 01:14:16.741521 | TASK [Bootstrap services] 2026-03-11 01:14:17.581211 | orchestrator | 2026-03-11 01:14:17.581357 | 
orchestrator | # BOOTSTRAP 2026-03-11 01:14:17.581367 | orchestrator | 2026-03-11 01:14:17.581372 | orchestrator | + set -e 2026-03-11 01:14:17.581377 | orchestrator | + echo 2026-03-11 01:14:17.581383 | orchestrator | + echo '# BOOTSTRAP' 2026-03-11 01:14:17.581390 | orchestrator | + echo 2026-03-11 01:14:17.581411 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-11 01:14:17.586857 | orchestrator | + set -e 2026-03-11 01:14:17.586942 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-11 01:14:21.743110 | orchestrator | 2026-03-11 01:14:21 | INFO  | It takes a moment until task 0cb10856-d64e-460d-a71a-1abfd68a1bd4 (flavor-manager) has been started and output is visible here. 2026-03-11 01:14:28.934512 | orchestrator | 2026-03-11 01:14:24 | INFO  | Flavor SCS-1L-1 created 2026-03-11 01:14:28.934605 | orchestrator | 2026-03-11 01:14:24 | INFO  | Flavor SCS-1L-1-5 created 2026-03-11 01:14:28.934616 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-2 created 2026-03-11 01:14:28.934622 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-2-5 created 2026-03-11 01:14:28.934633 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-4 created 2026-03-11 01:14:28.934642 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-4-10 created 2026-03-11 01:14:28.934648 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-8 created 2026-03-11 01:14:28.934655 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-8-20 created 2026-03-11 01:14:28.934674 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-2V-4 created 2026-03-11 01:14:28.934681 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-4-10 created 2026-03-11 01:14:28.934687 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-8 created 2026-03-11 01:14:28.934694 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-8-20 created 2026-03-11 01:14:28.934700 | orchestrator | 2026-03-11 
01:14:26 | INFO  | Flavor SCS-2V-16 created 2026-03-11 01:14:28.934706 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-16-50 created 2026-03-11 01:14:28.934713 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-4V-8 created 2026-03-11 01:14:28.934720 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-8-20 created 2026-03-11 01:14:28.934727 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-16 created 2026-03-11 01:14:28.934734 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-16-50 created 2026-03-11 01:14:28.934740 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-32 created 2026-03-11 01:14:28.934747 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-32-100 created 2026-03-11 01:14:28.934754 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-8V-16 created 2026-03-11 01:14:28.934761 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-8V-16-50 created 2026-03-11 01:14:28.934767 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-8V-32 created 2026-03-11 01:14:28.934774 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-8V-32-100 created 2026-03-11 01:14:28.934780 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-16V-32 created 2026-03-11 01:14:28.934786 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-16V-32-100 created 2026-03-11 01:14:28.934793 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-2V-4-20s created 2026-03-11 01:14:28.934800 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-4V-8-50s created 2026-03-11 01:14:28.934807 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-8V-32-100s created 2026-03-11 01:14:31.170882 | orchestrator | 2026-03-11 01:14:31 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-11 01:14:41.420836 | orchestrator | 2026-03-11 01:14:41 | INFO  | Task 8d4bd597-b657-41dc-b7d6-f2bddb2b3c11 (bootstrap-basic) was prepared for execution. 
2026-03-11 01:14:41.420995 | orchestrator | 2026-03-11 01:14:41 | INFO  | It takes a moment until task 8d4bd597-b657-41dc-b7d6-f2bddb2b3c11 (bootstrap-basic) has been started and output is visible here. 2026-03-11 01:15:25.884186 | orchestrator | 2026-03-11 01:15:25.884258 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-11 01:15:25.884269 | orchestrator | 2026-03-11 01:15:25.884276 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 01:15:25.884283 | orchestrator | Wednesday 11 March 2026 01:14:45 +0000 (0:00:00.067) 0:00:00.067 ******* 2026-03-11 01:15:25.884289 | orchestrator | ok: [localhost] 2026-03-11 01:15:25.884297 | orchestrator | 2026-03-11 01:15:25.884303 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-11 01:15:25.884310 | orchestrator | Wednesday 11 March 2026 01:14:47 +0000 (0:00:01.654) 0:00:01.721 ******* 2026-03-11 01:15:25.884317 | orchestrator | ok: [localhost] 2026-03-11 01:15:25.884323 | orchestrator | 2026-03-11 01:15:25.884330 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-11 01:15:25.884336 | orchestrator | Wednesday 11 March 2026 01:14:56 +0000 (0:00:09.058) 0:00:10.780 ******* 2026-03-11 01:15:25.884343 | orchestrator | changed: [localhost] 2026-03-11 01:15:25.884350 | orchestrator | 2026-03-11 01:15:25.884357 | orchestrator | TASK [Create public network] *************************************************** 2026-03-11 01:15:25.884363 | orchestrator | Wednesday 11 March 2026 01:15:03 +0000 (0:00:07.090) 0:00:17.870 ******* 2026-03-11 01:15:25.884370 | orchestrator | changed: [localhost] 2026-03-11 01:15:25.884377 | orchestrator | 2026-03-11 01:15:25.884383 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-11 01:15:25.884390 | orchestrator | Wednesday 11 March 
2026 01:15:08 +0000 (0:00:04.780) 0:00:22.650 ******* 2026-03-11 01:15:25.884399 | orchestrator | changed: [localhost] 2026-03-11 01:15:25.884405 | orchestrator | 2026-03-11 01:15:25.884412 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-11 01:15:25.884418 | orchestrator | Wednesday 11 March 2026 01:15:14 +0000 (0:00:06.330) 0:00:28.981 ******* 2026-03-11 01:15:25.884424 | orchestrator | changed: [localhost] 2026-03-11 01:15:25.884431 | orchestrator | 2026-03-11 01:15:25.884437 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-11 01:15:25.884443 | orchestrator | Wednesday 11 March 2026 01:15:18 +0000 (0:00:04.129) 0:00:33.111 ******* 2026-03-11 01:15:25.884450 | orchestrator | changed: [localhost] 2026-03-11 01:15:25.884457 | orchestrator | 2026-03-11 01:15:25.884463 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-11 01:15:25.884478 | orchestrator | Wednesday 11 March 2026 01:15:22 +0000 (0:00:03.653) 0:00:36.764 ******* 2026-03-11 01:15:25.884485 | orchestrator | ok: [localhost] 2026-03-11 01:15:25.884492 | orchestrator | 2026-03-11 01:15:25.884499 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:15:25.884506 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:15:25.884513 | orchestrator | 2026-03-11 01:15:25.884519 | orchestrator | 2026-03-11 01:15:25.884526 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:15:25.884534 | orchestrator | Wednesday 11 March 2026 01:15:25 +0000 (0:00:03.398) 0:00:40.163 ******* 2026-03-11 01:15:25.884540 | orchestrator | =============================================================================== 2026-03-11 01:15:25.884547 | orchestrator | Get volume type LUKS 
---------------------------------------------------- 9.06s 2026-03-11 01:15:25.884554 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.09s 2026-03-11 01:15:25.884560 | orchestrator | Set public network to default ------------------------------------------- 6.33s 2026-03-11 01:15:25.884567 | orchestrator | Create public network --------------------------------------------------- 4.78s 2026-03-11 01:15:25.884589 | orchestrator | Create public subnet ---------------------------------------------------- 4.13s 2026-03-11 01:15:25.884596 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.65s 2026-03-11 01:15:25.884602 | orchestrator | Create manager role ----------------------------------------------------- 3.40s 2026-03-11 01:15:25.884608 | orchestrator | Gathering Facts --------------------------------------------------------- 1.65s 2026-03-11 01:15:28.228811 | orchestrator | 2026-03-11 01:15:28 | INFO  | It takes a moment until task 8eaeacf9-12aa-4be7-acb3-c8f86da23d83 (image-manager) has been started and output is visible here. 2026-03-11 01:16:09.585816 | orchestrator | 2026-03-11 01:15:30 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-11 01:16:09.585937 | orchestrator | 2026-03-11 01:15:31 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-11 01:16:09.585959 | orchestrator | 2026-03-11 01:15:31 | INFO  | Importing image Cirros 0.6.2 2026-03-11 01:16:09.585973 | orchestrator | 2026-03-11 01:15:31 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-11 01:16:09.585987 | orchestrator | 2026-03-11 01:15:33 | INFO  | Waiting for image to leave queued state... 2026-03-11 01:16:09.586002 | orchestrator | 2026-03-11 01:15:35 | INFO  | Waiting for import to complete... 
2026-03-11 01:16:09.586087 | orchestrator | 2026-03-11 01:15:45 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-11 01:16:09.586104 | orchestrator | 2026-03-11 01:15:45 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-11 01:16:09.586118 | orchestrator | 2026-03-11 01:15:45 | INFO  | Setting internal_version = 0.6.2 2026-03-11 01:16:09.586132 | orchestrator | 2026-03-11 01:15:45 | INFO  | Setting image_original_user = cirros 2026-03-11 01:16:09.586145 | orchestrator | 2026-03-11 01:15:45 | INFO  | Adding tag os:cirros 2026-03-11 01:16:09.586159 | orchestrator | 2026-03-11 01:15:45 | INFO  | Setting property architecture: x86_64 2026-03-11 01:16:09.586173 | orchestrator | 2026-03-11 01:15:46 | INFO  | Setting property hw_disk_bus: scsi 2026-03-11 01:16:09.586186 | orchestrator | 2026-03-11 01:15:46 | INFO  | Setting property hw_rng_model: virtio 2026-03-11 01:16:09.586199 | orchestrator | 2026-03-11 01:15:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-11 01:16:09.586211 | orchestrator | 2026-03-11 01:15:46 | INFO  | Setting property hw_watchdog_action: reset 2026-03-11 01:16:09.586224 | orchestrator | 2026-03-11 01:15:46 | INFO  | Setting property hypervisor_type: qemu 2026-03-11 01:16:09.586237 | orchestrator | 2026-03-11 01:15:47 | INFO  | Setting property os_distro: cirros 2026-03-11 01:16:09.586249 | orchestrator | 2026-03-11 01:15:47 | INFO  | Setting property os_purpose: minimal 2026-03-11 01:16:09.586404 | orchestrator | 2026-03-11 01:15:47 | INFO  | Setting property replace_frequency: never 2026-03-11 01:16:09.586421 | orchestrator | 2026-03-11 01:15:47 | INFO  | Setting property uuid_validity: none 2026-03-11 01:16:09.586435 | orchestrator | 2026-03-11 01:15:47 | INFO  | Setting property provided_until: none 2026-03-11 01:16:09.586449 | orchestrator | 2026-03-11 01:15:48 | INFO  | Setting property image_description: Cirros 2026-03-11 01:16:09.586463 | orchestrator | 2026-03-11 01:15:48 | INFO  | 
Setting property image_name: Cirros 2026-03-11 01:16:09.586478 | orchestrator | 2026-03-11 01:15:48 | INFO  | Setting property internal_version: 0.6.2 2026-03-11 01:16:09.586493 | orchestrator | 2026-03-11 01:15:48 | INFO  | Setting property image_original_user: cirros 2026-03-11 01:16:09.586530 | orchestrator | 2026-03-11 01:15:48 | INFO  | Setting property os_version: 0.6.2 2026-03-11 01:16:09.586547 | orchestrator | 2026-03-11 01:15:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-11 01:16:09.586557 | orchestrator | 2026-03-11 01:15:49 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-11 01:16:09.586565 | orchestrator | 2026-03-11 01:15:49 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-11 01:16:09.586623 | orchestrator | 2026-03-11 01:15:49 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-11 01:16:09.586632 | orchestrator | 2026-03-11 01:15:49 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-11 01:16:09.586641 | orchestrator | 2026-03-11 01:15:49 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-11 01:16:09.586655 | orchestrator | 2026-03-11 01:15:49 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-11 01:16:09.586664 | orchestrator | 2026-03-11 01:15:49 | INFO  | Importing image Cirros 0.6.3 2026-03-11 01:16:09.586740 | orchestrator | 2026-03-11 01:15:49 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-11 01:16:09.586749 | orchestrator | 2026-03-11 01:15:51 | INFO  | Waiting for image to leave queued state... 2026-03-11 01:16:09.586757 | orchestrator | 2026-03-11 01:15:53 | INFO  | Waiting for import to complete... 
2026-03-11 01:16:09.586782 | orchestrator | 2026-03-11 01:16:03 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-11 01:16:09.586789 | orchestrator | 2026-03-11 01:16:04 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-11 01:16:09.586797 | orchestrator | 2026-03-11 01:16:04 | INFO  | Setting internal_version = 0.6.3
2026-03-11 01:16:09.586805 | orchestrator | 2026-03-11 01:16:04 | INFO  | Setting image_original_user = cirros
2026-03-11 01:16:09.586818 | orchestrator | 2026-03-11 01:16:04 | INFO  | Adding tag os:cirros
2026-03-11 01:16:09.586829 | orchestrator | 2026-03-11 01:16:04 | INFO  | Setting property architecture: x86_64
2026-03-11 01:16:09.586841 | orchestrator | 2026-03-11 01:16:04 | INFO  | Setting property hw_disk_bus: scsi
2026-03-11 01:16:09.586853 | orchestrator | 2026-03-11 01:16:04 | INFO  | Setting property hw_rng_model: virtio
2026-03-11 01:16:09.586865 | orchestrator | 2026-03-11 01:16:04 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-11 01:16:09.586877 | orchestrator | 2026-03-11 01:16:05 | INFO  | Setting property hw_watchdog_action: reset
2026-03-11 01:16:09.586889 | orchestrator | 2026-03-11 01:16:05 | INFO  | Setting property hypervisor_type: qemu
2026-03-11 01:16:09.586902 | orchestrator | 2026-03-11 01:16:05 | INFO  | Setting property os_distro: cirros
2026-03-11 01:16:09.586944 | orchestrator | 2026-03-11 01:16:06 | INFO  | Setting property os_purpose: minimal
2026-03-11 01:16:09.586958 | orchestrator | 2026-03-11 01:16:06 | INFO  | Setting property replace_frequency: never
2026-03-11 01:16:09.586971 | orchestrator | 2026-03-11 01:16:06 | INFO  | Setting property uuid_validity: none
2026-03-11 01:16:09.586984 | orchestrator | 2026-03-11 01:16:06 | INFO  | Setting property provided_until: none
2026-03-11 01:16:09.586995 | orchestrator | 2026-03-11 01:16:07 | INFO  | Setting property image_description: Cirros
2026-03-11 01:16:09.587007 | orchestrator | 2026-03-11 01:16:07 | INFO  | Setting property image_name: Cirros
2026-03-11 01:16:09.587019 | orchestrator | 2026-03-11 01:16:07 | INFO  | Setting property internal_version: 0.6.3
2026-03-11 01:16:09.587043 | orchestrator | 2026-03-11 01:16:07 | INFO  | Setting property image_original_user: cirros
2026-03-11 01:16:09.587051 | orchestrator | 2026-03-11 01:16:07 | INFO  | Setting property os_version: 0.6.3
2026-03-11 01:16:09.587058 | orchestrator | 2026-03-11 01:16:08 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-11 01:16:09.587065 | orchestrator | 2026-03-11 01:16:08 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-11 01:16:09.587072 | orchestrator | 2026-03-11 01:16:08 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-11 01:16:09.587079 | orchestrator | 2026-03-11 01:16:08 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-11 01:16:09.587086 | orchestrator | 2026-03-11 01:16:08 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-11 01:16:09.916464 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-11 01:16:12.206440 | orchestrator | 2026-03-11 01:16:12 | INFO  | date: 2026-03-10
2026-03-11 01:16:12.206519 | orchestrator | 2026-03-11 01:16:12 | INFO  | image: octavia-amphora-haproxy-2024.2.20260310.qcow2
2026-03-11 01:16:12.206574 | orchestrator | 2026-03-11 01:16:12 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260310.qcow2
2026-03-11 01:16:12.206592 | orchestrator | 2026-03-11 01:16:12 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260310.qcow2.CHECKSUM
2026-03-11 01:16:12.363063 | orchestrator | 2026-03-11 01:16:12 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/work/logs"
2026-03-11 01:16:45.762558 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/work/artifacts"
2026-03-11 01:16:46.041441 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f4dbef49419b430cbfedd1f7a77edb21/work/docs"
2026-03-11 01:16:46.069460 |
2026-03-11 01:16:46.069629 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-11 01:16:47.020144 | orchestrator | changed: .d..t...... ./
2026-03-11 01:16:47.020452 | orchestrator | changed: All items complete
2026-03-11 01:16:47.020497 |
2026-03-11 01:16:47.705183 | orchestrator | changed: .d..t...... ./
2026-03-11 01:16:48.418344 | orchestrator | changed: .d..t...... ./
2026-03-11 01:16:48.443547 |
2026-03-11 01:16:48.443681 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-11 01:16:48.477511 | orchestrator | skipping: Conditional result was False
2026-03-11 01:16:48.483747 | orchestrator | skipping: Conditional result was False
2026-03-11 01:16:48.509845 |
2026-03-11 01:16:48.509962 | PLAY RECAP
2026-03-11 01:16:48.510060 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-11 01:16:48.510091 |
2026-03-11 01:16:48.651747 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-11 01:16:48.652942 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-11 01:16:49.436525 |
2026-03-11 01:16:49.436684 | PLAY [Base post]
2026-03-11 01:16:49.451397 |
2026-03-11 01:16:49.451536 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-11 01:16:50.854462 | orchestrator | changed
2026-03-11 01:16:50.861604 |
2026-03-11 01:16:50.861718 | PLAY RECAP
2026-03-11 01:16:50.861781 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-11 01:16:50.861844 |
2026-03-11 01:16:50.990965 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-11 01:16:50.994155 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-11 01:16:51.816335 |
2026-03-11 01:16:51.816520 | PLAY [Base post-logs]
2026-03-11 01:16:51.827719 |
2026-03-11 01:16:51.827879 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-11 01:16:52.300569 | localhost | changed
2026-03-11 01:16:52.311165 |
2026-03-11 01:16:52.311332 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-11 01:16:52.350152 | localhost | ok
2026-03-11 01:16:52.356736 |
2026-03-11 01:16:52.356900 | TASK [Set zuul-log-path fact]
2026-03-11 01:16:52.386118 | localhost | ok
2026-03-11 01:16:52.401892 |
2026-03-11 01:16:52.402119 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-11 01:16:52.441106 | localhost | ok
2026-03-11 01:16:52.448666 |
2026-03-11 01:16:52.448854 | TASK [upload-logs : Create log directories]
2026-03-11 01:16:53.021613 | localhost | changed
2026-03-11 01:16:53.027148 |
2026-03-11 01:16:53.027329 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-11 01:16:53.562224 | localhost -> localhost | ok: Runtime: 0:00:00.007179
2026-03-11 01:16:53.572280 |
2026-03-11 01:16:53.572512 | TASK [upload-logs : Upload logs to log server]
2026-03-11 01:16:54.147350 | localhost | Output suppressed because no_log was given
2026-03-11 01:16:54.149508 |
2026-03-11 01:16:54.149613 | LOOP [upload-logs : Compress console log and json output]
2026-03-11 01:16:54.209468 | localhost | skipping: Conditional result was False
2026-03-11 01:16:54.214401 | localhost | skipping: Conditional result was False
2026-03-11 01:16:54.220449 |
2026-03-11 01:16:54.220604 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-11 01:16:54.283672 | localhost | skipping: Conditional result was False
2026-03-11 01:16:54.284321 |
2026-03-11 01:16:54.295715 | localhost | skipping: Conditional result was False
2026-03-11 01:16:54.301914 |
2026-03-11 01:16:54.302155 | LOOP [upload-logs : Upload console log and json output]